Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.
But that hesitation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.
You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!
The History of Robots
The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Čapek’s play R.U.R., or Rossum’s Universal Robots. “Robot” comes from the Czech for “forced labor.” Čapek’s robots were robots more in spirit than form, though: They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.
R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (see Terminator, The Stepford Wives, Blade Runner) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.
The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously. And a robot can sense and manipulate its environment.
Robo-cabulary
Human-robot interaction
A field of robotics that studies the relationship between people and machines. For example, a self-driving car could see a stop sign and hit the brakes at the last minute, but that would terrify pedestrians and passengers alike. By studying human-robot interaction, roboticists can shape a world in which people and machines get along without breaking each other.
Singularity
The hypothetical point where the machines grow so advanced that humans are forced into a societal and existential crisis.
Multiplicity
The idea that robots and AI won’t supplant humans, but complement them.
Actuator
Typically, a combination of an electric motor and a gearbox. Actuators are what power most robots.
Soft robotics
A field of robotics that forgoes traditional rigid materials and electric motors in favor of generally softer materials, pumping air or oil to move a robot’s parts.
Lidar
Lidar, or light detection and ranging, is a system that blasts a robot’s surroundings with lasers to build a 3-D map. This is pivotal both for self-driving cars and for service robots that need to work with humans without running them down.
Humanoid
The classical sci-fi robot. This is perhaps the most challenging form of robot to engineer, because walking and balancing on two legs is both technically difficult and energetically costly. But humanoids may hold promise in rescue operations, where they’d be better able to navigate an environment designed for humans, like a nuclear reactor.
Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.
But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.
Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.
Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s, Honda started a humanoid robotics program. It developed P3, which could walk pretty darn well, and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)
Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.
So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which spews lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
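If you want a feel for the geometry, here’s a minimal Python sketch (the function name and the toy scan values are ours, not from any real lidar driver): each laser return is just a measured distance plus the direction the beam was fired in, and a bit of trigonometry turns that into a point on a 3-D map.

```python
import numpy as np

def lidar_to_points(ranges, azimuths, elevations):
    """Convert lidar returns (meters, radians) to 3-D points.

    Hypothetical minimal model: spherical-to-Cartesian conversion
    maps each (distance, beam direction) pair to an (x, y, z)
    point in the sensor's own frame of reference.
    """
    r = np.asarray(ranges)
    az = np.asarray(azimuths)
    el = np.asarray(elevations)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# One toy sweep: a wall roughly 5 meters away, sampled across a
# 90-degree arc at zero elevation.
points = lidar_to_points(
    ranges=[5.0, 5.2, 5.1],
    azimuths=np.radians([-45, 0, 45]),
    elevations=np.radians([0, 0, 0]),
)
print(points)  # each row is one (x, y, z) point on the robot's map
```

Stack up millions of those points per second and you have the 3-D map a self-driving car or sidewalk robot navigates by.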
Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.
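For a rough sense of what that looks like in practice, here’s a short sketch using OpenCV, a widely used open source vision library, with one of its stock pretrained face detectors. This is a common off-the-shelf approach, not necessarily what any particular robot runs, and the camera-frame filename is hypothetical.

```python
import cv2  # OpenCV: pip install opencv-python

# Load a pretrained Haar-cascade face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("camera_frame.jpg")          # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is a bounding box: the robot now "knows" where
# in its field of view each face sits.
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```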
Sensors are what keep robots from running us down. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.
Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.
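The back-of-the-envelope math is simple: a gearbox trades speed for strength. A toy Python model (all numbers illustrative, not the specs of any real robot) shows why a small, fast motor behind a big gear ratio can move a heavy limb:

```python
def actuator_output(motor_torque_nm, motor_speed_rpm, gear_ratio, efficiency=0.9):
    """Toy motor-plus-gearbox model: torque scales up by the gear
    ratio (minus friction losses), speed scales down by the same."""
    torque = motor_torque_nm * gear_ratio * efficiency
    speed = motor_speed_rpm / gear_ratio
    return torque, speed

# A hobby-class motor behind a 100:1 gearbox:
torque, speed = actuator_output(0.1, 6000, gear_ratio=100)
print(f"joint output: {torque:.1f} N*m at {speed:.0f} rpm")  # ~9.0 N*m at 60 rpm
```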
Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike their rigid cousins, soft robots are generally squishy, and use air or oil to get themselves moving. One kind of robot muscle, for instance, uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike bulky traditional actuators, these can be stacked to multiply their strength. A humanoid named Kengoro takes a related cable-driven approach: it moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.
And then there’s Boston Dynamics, which created the Atlas humanoid for the 2013 Darpa Robotics Challenge. At first, university research teams struggled to get the machine through the challenge’s basic tasks, like turning valves and opening doors, both in the original 2013 round and in the 2015 finals. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics is also working on a quadruped robot called SpotMini, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.
At the same time that robots like Atlas and SpotMini are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart. This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can pick up a range of sensations, from temperature to coarseness.
As sensors get cheaper, the superpowered processors required for AI are doing the same. Thanks to advances in gaming and VR, graphics processing units, or GPUs, now let mobile robots perform complex computations right onboard, as opposed to in the cloud, which means they can still operate if they lose their connection. This is particularly important for powering the machine vision that allows a robot like Kuri to recognize your face. To help you, by the way, not hunt you or anything.
Ideally, that is.
The Future of Robots
Increasingly sophisticated machines may populate our world, but for robots to be really useful, they’ll have to become more self-sufficient. After all, it would be impossible to program a home robot with the instructions for gripping each and every object it ever might encounter. You want it to learn on its own, which is where advances in artificial intelligence come in.
Take Brett the robot. In a UC Berkeley lab, the humanoid has taught itself to conquer one of those children’s puzzles where you cram pegs into differently shaped holes. It did so by trial and error, through a process called reinforcement learning. No one told it how to get a square peg into a square hole, just that it needed to. So by making random movements and getting a digital reward (basically: yes, do that kind of thing again) each time it got closer to success, Brett learned something new on its own. The process is super slow, sure, but with time roboticists will hone the machines’ ability to teach themselves novel skills in novel environments, which is pivotal if we don’t want to get stuck babysitting them.
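To make that loop concrete, here’s a minimal sketch: tabular Q-learning on an invented one-dimensional “nudge the peg toward the hole” task. Brett learns over continuous, high-dimensional arm motions, so this is a drastic simplification of what happens in the Berkeley lab, but the reward-driven update is the same basic idea.

```python
import random

N_POSITIONS = 10    # the peg can sit at positions 0..9
HOLE = 7            # the square hole lives at position 7
ACTIONS = [-1, +1]  # nudge left or nudge right

# Q-table: the learned value of taking each action in each state.
q = {(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = random.randrange(N_POSITIONS)
    for _ in range(50):
        # Mostly exploit what we know; occasionally try something random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_POSITIONS - 1)
        # The "digital reward": +1 for reaching the hole, a tiny
        # penalty otherwise, so faster routes score better.
        reward = 1.0 if nxt == HOLE else -0.01
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == HOLE:
            break

# After training, the greedy policy marches the peg straight to the hole.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_POSITIONS)])
```

No one hard-coded the route; the table of values, and the behavior it produces, emerged entirely from random nudges plus reward.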