From a distance, it could’ve looked just like a group of men hanging out on a crisp New England evening, watching their dogs play with a soccer ball. But the sound would have given it away to anyone approaching: a metallic scratch hanging in the dusk air — clickclickclickclick, again and again, like camera shutters on an Oscars red carpet.
It’s not surprising that a group of MIT robotics engineers has created something dubbed the “Mini Cheetah,” nor is it surprising that a video of those lil four-legged robots running and flipping in the grass went viral earlier this month.
Clearly, people are as terrified of these squat, speedy little machines as they are fascinated. “They’ll be playing soccer with human skulls after Judgement Day,” one commenter quipped.
“The Robopocalypse is going to be totally adorable,” another added.
“Who ever was laughing isn’t getting it. This is scary,” wrote an earnest voice.
As for me? Well, my mind went to the obvious reference point: an episode of the dystopian future-tech drama Black Mirror titled “Metalhead,” in which human survivors in an empty, apocalyptic world are hunted by robot dogs that look damn near identical to the MIT Cheetahs. I mean, look at this scene, in which a robot sprints into the back of a van, kills its driver and “plugs” into the car’s mainframe to control it and chase a human-driven car:
The future looks terrifying. It’s also pretty close to being here. The episode’s creators were, after all, directly influenced by the robots of Boston Dynamics. The firm has also gone viral several times over for its animal-like creations, starting with the BigDog, a huge and genuinely intimidating machine that was designed using funding from the U.S. military. The team has also developed other, smaller models that are actually being put to work in civilian spaces, as with “Spot,” a medium-sized, yellow-coated robot that’s currently tapping around a construction site at San Francisco International Airport, documenting the progress of workers.
Here’s my question: Why the hell does all of this tech feel so damn creepy?
It’s not just dogs and cats but horrible little roach robots that can’t be squashed by a boot. There are near-autonomous drone swarms that seem straight out of The Matrix. Sometimes they’re modeled to look like androids, which is how we get this RoboCop-looking, pistol-toting Russian monstrosity. It’s all happening so fast that people routinely fall for fake videos of “robot testing.” Never mind that this YouTube clip from VFX crew Corridor Digital literally ends with the robot retaliating against its human programmers: We’re so conditioned to expect robot-uprising madness that it looks about right if you’re not squinting hard enough.
This is all extremely spooky and very banal at the same time. Robots already make our world run, and even the viral tech we’ve seen is still controlled, via remote, by a person. We haven’t yet made the breakthrough needed to meld these cutting-edge robots with the artificial intelligence they’d need to resemble an independent being. And right now, a robot like Spot isn’t hunting anything down at all — the model is being used mostly for surveying gigs, where a bot is cheaper and less risky than human labor. Experts seem to agree that what comes next are equally mundane, programmable tasks like package delivery and security patrolling.
That’s not quite as scary as a Terminator-like uprising of laser-shooting skeletons. But I have to admit, I’m skeptical that our gradual acceptance of smart robots, and the conveniences they bring, won’t take on a sinister edge fast. We’ve already invited open ears into our homes in the form of Alexa, Google and Siri. Our phones themselves show our whereabouts at all times. Ring doorbells are watching and hearing more than we know. Airports, casinos and countless other high-security venues are quietly using facial recognition to search for “risks.” So are the cops, most notably in the oppressed region of Xinjiang, China.
Can you imagine the Mini Cheetah fitting into this plan to meld surveillance, communication and control with cold-blooded tools designed for efficiency? How can you not?
Welcome to the dissonance between thinking, “Aw, look at that lil robot dog,” and “Oh my God, look at that robot dog tearing my leg apart.” And, unsurprisingly, the engineers and manufacturers of this tech are keenly aware of the PR crisis. The key is developing “trust” in robot design, one firm proclaims. Engineers have experimented with dumbing down robots from their max performance in order to make us feel less creeped out about the whole thing. And some attempts to use robots have been curtailed because of a human backlash, like with these fascist Dalek-looking anti-homeless cop-bot things, which got beat up by humans in San Francisco.
Am I just a coward for wanting all robots to have the vibes of the dumb, careful little boxes that scoot around and deliver food? Is my naive fear of AI-driven murder machines blocking the view of a utopian future? I turned to the findings of U.C. Berkeley professor Stuart Russell, a leading expert on AI development, who has spoken strongly about the dangers of letting innovation happen without ethical thought.
“You can draw an analogy to what would happen if a superior alien species landed on Earth. How would we control them? And the answer is: You wouldn’t. We’d be toast. In order to not be toast, we have to take advantage of the fact that this is not an alien species, but this is something that we design,” he said in a university interview. “So how do we design machines that are going to be more intelligent and more powerful than us in such a way that they never have any power over us?”
That’s the question that we’re struggling to answer, perhaps because the scientific and tech industries aren’t yet savvy enough to implant autonomous behavior into, say, a BigDog. We’re barely starting to grasp what Russell believes is the miraculous upside of smart robots that can replace human labor: “What costs money in the end is the involvement of human beings in the process. Even the raw materials are effectively free, except for the labor costs of extraction. By having AI systems doing everything, the cost of material goods becomes essentially free,” he said.
So, yeah: Utopia. Such is the crux of all technological advancement — there’s a best-case scenario, then there’s a worst-case scenario (say, AI tools used by nation-states to inflict mass violence), and in reality, we’ll get big doses of both. The people selling the machines will try their hardest to convince us all is well and normal. The rest of us will just have to sit and wait to see how right Black Mirror was after all.