
Can This Tech Firm Predict Hollywood’s Next Serial Abuser?

AI and social media ‘listening’ software can help companies weed out risky spokesmen, but it’s not as simple as it sounds

“The three most believable personalities are God, Walter Cronkite and Bill Cosby,” Anthony Tortorici, the director of public relations at Coca-Cola, told Black Enterprise magazine in 1981. Cosby didn’t just shill for Jell-O and Pudding Pops at the time: He was also ubiquitous in ads for computers, cars and cigars. According to Advertising Age, during Cosby’s 14-year reign atop the ad industry’s public-approval index, the only figure to rank higher was the Pope. Essentially, in the mid-1980s, Bill Cosby was viewed as the most trustworthy celebrity on Earth.

But in 2014, amid renewed attention to the accusations of the 60 women who alleged the actor had drugged and raped them in incidents stretching back to 1965, Cosby’s credibility finally collapsed and his “trustworthy” ranking dropped 2,625 spots. Like other disgraced celebrity spokespeople, Cosby didn’t just take his corporeal self down with him; he took the products he’d been shilling, too: Jell-O now seemed like a sinister snack, and forget about licking a Pudding Pop.

But what if an algorithm could have predicted Cosby’s downfall and saved brands the headache of being forever associated with a predator? That’s the wild premise of Spotted Inc., founded by technologist Janet Comenos. By using proprietary algorithms that scan police records, social media chatter and biographical information of potential celebrity spokespeople, Comenos believes her company can help brands avoid #MeToo (and other) scandals of their own. For a prime example, look no further than the Academy, which apparently failed to catch homophobic tweets by Kevin Hart, whom it had chosen to host the Oscars.

Spotted joins a new crop of services providing AI and social media “listening” software not only to monitor workers and predict their performance, but also to help companies weed out especially risky applicants like serial offenders. “The problem with making high-visibility decisions with your gut is that they can become high-visibility mistakes,” says Comenos. “Perception is reality here, and when a celebrity is convicted of a sex crime, people remember. Women, especially, are having a very difficult time forgiving them.”

Josiah Wedgwood, founder of Wedgwood China, spearheaded the trend of using familiar faces to sell products, recruiting royals to endorse his teacups back in the 1700s. Over the next century, aristocratic endorsements by “august houses” and “houses of repute” were surefire ways for manufacturers to get their wares off dusty shelves.

But since then, the methods that companies rely on to analyze the risk of their human assets haven’t evolved as much as you’d think. While she was working in digital advertising, Comenos was surprised to find that Fortune 500 brands like Nike, Coca-Cola and Ford often chose celebrity spokespeople not based on metrics, but on a CEO’s hunch. An avid consumer of tabloids (she says she’s deleted and reinstalled the Daily Mail app on her phone “at least 20 times”), she thought she’d try to help brands avoid potential missteps.  

When allegations of sexual abuse started to be taken seriously in Hollywood last year, four years after Spotted was founded, Comenos saw an opening. Studios were suddenly taking out “disgrace insurance,” meant to cover production-cost losses on films and TV shows due to illegal behavior by the cast or crew. But what about the risks of a predator spokesperson who’s aligned with an insurance company, sports apparel brand or fast food restaurant? “When you’re a brand, many don’t know how to assess these [celebrities],” she says. “That’s where we come in.”

Spotted also began to analyze data about how the public perceived celebrity victims and supporters of the #MeToo movement. According to a report the company shared with Digiday, 45 percent of respondents viewed such celebrities as “more attractive” after reading their stories, while 59 percent viewed them as more trustworthy and 51 percent viewed them as “cooler.”

Crucial to the company’s pitch is the idea that algorithmic analysis is free from the messy prejudices of the human mind. “We take a very objective approach that has zero assumptions or preconceptions,” says Mira Carbonara, VP of Product Commercialization at Spotted. But while it would seem that an algorithm could offer an objective, impartial celebrity recommendation, experts doubt whether any software could be truly unbiased, not to mention predictive of the next Tiger Woods, Paula Deen or Lance Armstrong. “These kinds of algorithms are built on data that reflects society as a whole: the good, the bad and the ugly,” says Hany Farid, a professor of computer science at the University of California, Berkeley. “The question for these [risk-assessment] companies is, do you have safeguards in place, and are you being transparent?”

Risk-assessment algorithms can fail in dramatic fashion. The highest-profile example is COMPAS — a tool courts have been using to decide who gets bail, who goes to jail and who goes free. Farid and his colleague, Julia Dressel, discovered that the algorithm was far more likely to incorrectly categorize black defendants as having a high risk of reoffending, and was no more effective than a random online poll of internet strangers. The problem, Farid notes, is that the algorithm was trained on data shot through with racism. “If you’re black, you’re more likely to be arrested, charged and convicted, so of course the algorithm is going to reflect that back to you,” he notes. “If you’re using data from a society that has a troubling past, I’m concerned.”
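To see what that failure looks like in practice, consider a minimal audit sketch in Python. The records, group labels and numbers below are invented for illustration (this is not COMPAS’s or Spotted’s code), but the check itself is standard: compare the false-positive rate, the share of people who never reoffended yet were still flagged as high risk, across groups.

```python
# Hypothetical audit of a binary risk classifier for group disparity.
# Records are made up; a real audit would pair historical predictions
# with observed outcomes.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", False, False),
    ("group_b", False, True),
    # ...in practice, thousands of rows
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labeled high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return float("nan")
    return sum(1 for r in negatives if r[1]) / len(negatives)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in by_group.items():
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A wide gap between those printed numbers is the signature critics flagged in COMPAS: one group gets labeled “high risk” far more often among people who never went on to reoffend.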

Carbonara is quick to note that the hometown of a celebrity is given the same weight as other attributes, including age, likability and confidence level. “We’ve found that likability plays a big role in the perception of risk — and likability is totally gender and race neutral,” she says. But the likability of a star, of course, could be influenced by their class, race, gender, orientation or nationality. Farid is doubtful that prejudice could be completely factored out of such an algorithm. “We know that a lot of data can be a proxy for race, including number of crimes committed, hometown, divorce rates and other socioeconomic data that is very complex and interrelated,” he says. “Even if you say you don’t use race in your predictions — all of these other things are correlated to race.”  
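Farid’s point about proxies is easy to demonstrate with a toy calculation. The sketch below uses invented numbers and a hypothetical “hometown risk score,” a feature a model would be allowed to use, to show how tightly such a feature can track a protected attribute the model never sees.

```python
# Hypothetical data illustrating a proxy variable. 1 = member of a protected
# group, 0 = not. The "proxy" is a feature the model is allowed to use,
# e.g. a score derived from hometown crime or income statistics.
group       = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
proxy_score = [0.9, 0.8, 0.7, 0.2, 0.3, 0.1, 0.85, 0.25, 0.6, 0.15]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation between proxy and protected group: {pearson(proxy_score, group):.2f}")
```

A correlation near 1.0 means the supposedly neutral feature carries almost all of the information the excluded attribute would have provided, so simply dropping race from the inputs does little to make the output race-blind.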

Comenos readily admits she thinks that a celebrity’s socioeconomic background plays a role in whether or not they commit a crime. “If a celebrity grew up in a poor area with a lot of gang activity, the likelihood that they’re involved in crime later on is higher than if they grew up in an affluent town and attended an accredited four-year university,” she says. That said, of the 24 actors and actresses accused of sexual misconduct, 13 (or 54 percent) graduated from college, more than the national average of 40 percent. Additionally, many of the highest-profile stars who stand accused — including James Franco, TJ Miller, Sylvester Stallone, Louis C.K., Jeffrey Tambor, Danny Masterson, Jeremy Piven, Kevin Spacey and Ben Affleck — grew up in neighborhoods where the median household income far exceeds the national average of $59,039.

One could imagine a scenario where a celebrity is denied a spokesperson gig because an algorithm used arbitrary or even prejudicial criteria to say they weren’t the right fit. The stakes, of course, are dramatically lower than in the criminal justice system. Still, Farid says, the danger of these algorithms isn’t just that they reinforce stigma; it’s that they convince their users they’re more sophisticated than they really are.

Whether an algorithm could predict a future Cosby, Minority Report-style, remains to be seen. “You could say the people out there who have done some bad things are a threat to products, but what if that’s just because they’ve never had endorsements before, and they’ve never had endorsements because they don’t look like a white dude?” wonders Farid. “What I fear about these algorithms is that they don’t fix these problems, they simply reinforce them.”