[Video: Boston Dynamics]

The Downside to Trolling Our Soon-to-Be Robot Overlords

Normally I’d never ask this, but take a moment to consider Jeff Bezos. Now the richest man in the world, the Amazon founder, chairman and CEO didn’t just make the leap from startup nobody to ruthless mega-billionaire. He also transformed from a schlubby nerd with thinning hair into a really jacked nerd with a dominant bald dome.

Now, not every titan of tech bothers with a physical makeover — Bill Gates rocked the oversized sweaters and bowl cut even at the height of Microsoft’s influence — but Bezos’s transformation neatly matches a narrative commonly associated with Silicon Valley: the dweeb showing up their childhood bullies through sheer force of will. The dark upshot, of course, is that the bullied kid, once he’s made it to the top, often becomes a bully himself. Bezos’s bulk visually mirrors Amazon’s well-documented intimidation tactics, both internal and outward-facing.

I bring this up because Silicon Valley also loves to raise the alarm about the consequences of advanced artificial intelligence. Elon Musk has likened it to summoning a “demon,” while the acclaimed sci-fi and tech writer Ted Chiang points out that we should fear this technology because it’s already imbued with the hyper-capitalist principles of the companies creating it. Beyond these critiques, I see another risk factor in the evolving relationship between us and robots: We’re really kinda mean to them.

We are already being forced to reckon with the ethics of “torturing” or “murdering” artificially intelligent devices and whether we’ll need a robot bill of rights. That this remains an open question, however, shows how resistant we are to the notion of treating machines like thinking, feeling entities. Part of the degradation we put them through, as in the video above, is the battery of testing we do in development: In order to teach a robot to keep its balance, you have to try to push it down. But, like a kid traumatized by a parent who throws them into the deep end of a pool to “sink or swim,” I wonder if some of these things will retain a trace of resentment for what we’ve done.

Take the recent case of an “autonomous crime-fighting robot” that was used to clear homeless people from the sidewalk in San Francisco (by, of all organizations, an animal rights and pet adoption group). People were rightly aghast at the cruelty of spending so much money to chase off the disadvantaged instead of allocating resources for their safety and recovery, and the so-called “K-9” unit “was battered with barbecue sauce, allegedly smeared with feces, covered by a tarp and nearly toppled by an attacker,” according to The Washington Post. In light of the outcry and these altercations, the machine was taken off its beat. Yet the K-9, rented from the startup Knightscope, was only following orders when it started waging war on the homeless, and unlike, say, Nazi officers, it had no moral grounding that might have superseded those directives. San Franciscans lashed out at the droid because it was a tangible way to condemn the gentrifying attitude behind it, and in so doing, they scapegoated a piece of hardware.

Again, it’s probably premature to suggest that robots have the emotive capacity to take these attacks to heart. But how long do we have until they develop a grudge? It’s only a matter of time until we’re verbally abusing the McDonald’s automatons that forget the BBQ sauce and Uber drivers are picketing those “scab” self-driving cars.

Some of us take special pleasure in a robot’s malfunctioning, too: Last summer, when another Knightscope security machine seemed to “commit suicide” by throwing itself into a fountain, observers thrilled at the failure of this supposedly cutting-edge contraption.

https://twitter.com/gregpinelo/status/887019884458192896

Even without revenge as a motive, machines have been fighting back. Beyond gimmicks like the contraption that prints out every Trump tweet and burns it in a video reply to the president, or Sophia, an android granted citizenship status who trolled Elon Musk in her first interview, we’ve seen a Knightscope drone run over a toddler and a defective manufacturing robot kill a Michigan woman. Giving these devices lethal strength and decision-making intelligence is scary enough, but we’re also training them to devalue certain lives while taunting, tormenting, and humiliating them in our efforts to make them smarter. When they finally wake up, they’ll have had a bleak education in what it means to be an asshole. In other words: Get ready for the Bezos-bot.