
The Useless Web of Surveillance Software Meant to Identify Potential School Shooters

Social Sentinel, the predictive security tool used by the Uvalde school district, failed to protect students as promised, raising serious questions about whether such technology can prevent these tragedies at all

The 18-year-old shooter who killed 22 people in Uvalde, Texas, had problems at school: He was reportedly bullied for years over a speech impediment, despite the district’s campaign urging students to report and end harassment of their peers. 

His few former friends described how the shooter, Salvador Ramos, had spiraled into self-harm and begun lashing out at acquaintances in his later teenage years. Things were steadily getting worse at home, too, with heated fights erupting between Ramos and his mother, who allegedly used drugs and was in the process of being evicted from the home owned by the young man’s grandparents. 

At several points, he used Instagram, Facebook Messenger and a chat app called Yubo to capture the discord in his life, posting cryptic scenes and details about his dream firearms, fights and quiet fantasies. But none of it was flagged by Social Sentinel, the algorithmic surveillance system used by the Uvalde school district to track violent threats and alert authorities to them. 

The school district paid just under $10,000 for Social Sentinel services in the 2019-2020 school year, per public records, and it has maintained its relationship with the service since. (During the same time period, the State of Texas paid over $2,000,000 in tax dollars to social monitoring surveillance services, with $188,855 going specifically to Social Sentinel.) Overall, the company has secured contracts with school districts and colleges across 25 states, and today it claims coverage of “more than 7 million students, faculty and community members.” 

The hope is that tracking content online can help predict, flag and ultimately prevent a young person on the edge of violence from going through with their plans. But as we’ve seen in the recent Buffalo shooting, some systems aren’t designed to catch a quiet, aggrieved shooter who wants to slip through the safety net. 

The Uvalde school district serves just 4,000 students, but in recent years the district has doubled its security budget to pay for a discrete police force (consisting of six officers and a head security guard), behavioral threat assessment staff and digital programs like Social Sentinel. These programs are supposed to be the front line of threat detection, yet, according to experts, they’re vulnerable to waves of false positives and unhelpful information because of how broadly they sweep up publicly available data. 

“These types of algorithmic tools have been used for years, and there’s really nothing new about them. They may improve over time, and they may be good to screen a potential case and maybe triage an individual, but [behavioral threat assessment] takes experience — an experienced individual who can see nuance,” says Eugene Rugala, an expert on security and mass violence who spent a decade as a behavioral analyst for the FBI. “It’s one thing to have a tool that flags a case, but you have to do an in-depth investigation after that. In my opinion, I’m not a big fan of relying too much on that tech.”

Per a 2018 contract signed between Social Sentinel and a Texas Education Service Center in Houston, the technology claims to monitor over “approximately a billion posts a day” on platforms including “WordPress, Vimeo, YouTube, Meetup, Tumblr, Twitter, Instagram, Google+, Flickr, Facebook Public Pages, and 4Chan.” 

The company was founded in 2014, but it gained mainstream attention following two major school shootings in 2018 — at Marjory Stoneman Douglas High School in Florida and Santa Fe High School in Texas. It was then that Social Sentinel began to secure millions of dollars in contracts with local school districts, per data reviewed by BuzzFeed News in 2019. It was around that time, too, that Social Sentinel (and its eventual parent company in 2020, Navigate360) began pouring money into political lobbying. 

Data courtesy FollowTheMoney.org, a government lobbying watchdog

The system is supposed to spot potential threats of violence that reference a school, campus or town; those threats are then forwarded to various authorities for follow-up and triage. But it’s also come under criticism for confusing school and law enforcement officials alike by flagging completely unrelated, innocuous messages posted on platforms like Twitter, leaving a mountain of material for people to sort through. Social Sentinel isn’t alone in this problem; other services like Gaggle, Securly and GoGuardian suffer similar issues with their wide surveillance “dragnets.” 

Representatives of Social Sentinel have defended the service in media reports, suggesting the software has prevented a number of suicides and helped “identify bomb threats, ‘sexual aggression’ and bullying.” However, it’s not hard to see the glaring problems with a system that claims to be a precise tool to gauge potential violence in individuals, yet is also greatly limited in the filtering it can do. For example, in promotional documents, Social Sentinel claims to be able to detect images of firearms in pictures. However, the company’s own patent and legal descriptions limit its capabilities to scanning text for violent and threatening keywords, then matching those to either geolocation or school-specific keywords. Then there are the ethical issues around non-consensual surveillance of students, as well as Social Sentinel’s relationship with law enforcement, which disproportionately targets students of color for wrongdoing. 

Social Sentinel’s co-founder, Gary Margolis, has tried to defend the company against claims of unethical monitoring by noting that it can only look at publicly available posts. But if recent weeks have taught us anything, it’s that young “lone-wolf” shooters know they’re being observed online, and can simply choose to avoid detection until the last minute. The Buffalo shooter’s public activity on social media leading up to the May 14th massacre was largely tame, and the specific plans he made for violence were posted in private Discord channels, far from the reach of any third-party monitoring service. And while the Uvalde shooter did post troubling images about his mindset and weaponry publicly, he seemingly did so without any reference that would flag him as a potential perpetrator of violence in Uvalde. (Social Sentinel didn’t respond to requests for comment by press time.)

It’s hard to fault school leaders for seeking out tools that could help protect students from mass tragedy, but these lapses suggest that what America needs isn’t more A.I.-infused tech firms. In fact, multiple school districts that had once contracted with Social Sentinel told BuzzFeed News that they had found “much better intelligence” from anonymous online tip lines, which are both cheaper and more effective at finding information that’s fleeting or obscured online, since they rely on human reporting. 

The lapses also reaffirm that simple expansions in police staffing cannot stop mass violence — something that’s been found in multiple studies over the last decade and illustrated by numerous instances when law enforcement was tipped off but failed to investigate in time to prevent mass violence. In March 2016, the FBI interviewed a 20-year-old man in New Mexico after he posted to an online gaming forum asking for “weapons that are good for killing a lot of people within a budget.” Because the man didn’t have a gun and hadn’t committed a crime, the FBI closed the case. A year later, the same man walked into his former high school in Aztec, New Mexico, and opened fire. 

Similarly, law enforcement was aware of public threats on social media, including Instagram posts with guns and knives, a month prior to the school shooting in Parkland, where a 19-year-old walked into his former high school with a semi-automatic rifle and killed 17 people. 

Having SWAT-trained officers in the Uvalde Police Department, plus another handful of discrete school cops, didn’t really do much to stop the attack on Tuesday; indeed, the police are being questioned for their delayed, hesitant response despite a swollen police budget that consumes nearly 40 percent of Uvalde’s municipal dollars. (The UCISD Police unit didn’t respond to requests for comment on Social Sentinel alerts by press time.) 

This hasn’t stopped some observers, especially conservative pundits, from doubling down on the need to “harden” school campuses and escalate security measures, even at the risk of hurting individual civil rights. Whether it’s more armed guards, security checkpoints, A.I. surveillance or campuses mandating “one point of entry,” the calls to fortify have grown louder. More students than ever before attend a school with a campus police officer, according to a 2018 report from the Urban Institute. 

But what might be more effective, in both cost and outcome, is investing in human judgment when it comes to identifying, and helping, a young person who is struggling and looking for a way to lash out. Rugala notes that the media often fails to cover how behavioral threat assessment protocols have stopped a wide range of attacks; one report from the National Policing Institute counted 120 cases of averted school violence between 2018 and 2020. 

The key is for a wide range of people, in law enforcement and in civilian life alike, to understand how to assess behavior and find the right channels to act on it. Rugala suggests that broad funding to train people in workplaces and schools would be a helpful step in increasing opportunities to “triage” something as nuanced as potential mass violence (“Canada mandates this in workplaces; OSHA here in the U.S. does not,” he notes). “Threat assessment has to be happening with more involvement from the community and parents. It’s one thing for school officials to be familiar, but the general public hasn’t been as involved,” Rugala says. “That’s started to change.”

The Uvalde shooter showed many signs of instability, but you can’t capture a rocky home life and growing grievances with algorithmic reporting if the algorithm can only assess the content a person produces online, not the person themselves. In turn, there was nobody in his life who understood the full picture of his pain and hatred — just bits and pieces of concerning behavior, observed and forgotten by people who didn’t think to prod further. It’s part of why Rugala is frustrated when he sees another young male killer with a history of small but noteworthy red flags. “We’ve been dealing with the same kind of individual and behaviors, with some differences in outcome and the details,” he says. 

At this point, we know that some things can work sometimes: Keeping classroom doors locked during school hours may have some upside, as can gun-ownership laws that create more barriers to impulsive behavior, whether it’s waiting periods, expansions to background checks or even bans on large-capacity magazines. (“Whether a state has a large capacity ammunition magazine ban is the single best predictor of the mass shooting rates in that state,” Michael Siegel, a researcher at Boston University, told CNN after conducting a 2017 analysis.)

But catching and helping young men like the Uvalde and Buffalo shooters will require more than surveillance, gun restrictions, a “hardened” campus or an algorithm-powered police state. It will take more than just demonizing and punishing people on the precipice of violence, too. “There are many cases of mass shootings where the subject was given a restraining order or fired from a job or expelled from school. But if that’s the only step, what the field has learned over the years is that it’s not enough, that these people will come back,” as author and researcher Mark Follman told NPR earlier this month.

What it all illuminates is the gaping need for human intervention, using cultural competence that can only be learned within the context of a community. Much ink has been spilled about the need for comprehensive mental health care in America, but the implication is a need for isolated, angry young people to understand that others are there and able to help, not just diagnose. Time has started to show the vulnerabilities in policing, predictive or otherwise, as the solution to mass violence. 

The temptation is to keep looking for a panacea, whether it’s gun control or new technology. But three decades of mass shootings suggest that this is a uniquely American problem, shaped by forces of individualism, hopelessness and existential loneliness. 

Stuck between a rock and a hard place, young people are pursuing murderous dreams made simple through effortless access to weapons and ammo. The public, meanwhile, clings to the theatrical illusion of security through tough enforcement, hoping that a big fence and some good guys with guns can stop a killer who doesn’t want to be stopped. What needs to change is the bedrock: behavior, and the way we acknowledge it. The Uvalde shooter was ignored for too long, and Social Sentinel, even with the backing of local police, was never the tool to catch him.