Where are the Robots that Sci-Fi Movies and Books Promised?
Last Updated on May 22, 2020 by Editorial Team
Author(s): Padmaja Kulkarni
Sci-Fi movies and books promised that the future would look rather different, with state-of-the-art robots. Star Wars gave us C-3PO and R2-D2. Back To The Future gave us dreams of time travel and ubiquitous flying cars. The Hitchhiker's Guide To The Galaxy promised robots capable of human emotions, and The Jetsons promised robots capable of looking after the household: cooking, cleaning, and keeping us company. It all seemed plausible given modern-day technological advancements and breakthroughs in the labs. And yet, when we look around, we don't seem to be even halfway there.
In reality, we have planes that come close to flying cars, and the capability to take a human to the moon, which is quite impressive. On the robotics front, however, we have the Roomba, a vacuum cleaner good at a single task; a dishwasher with no adaptive intelligence; and a programmable washing machine. Additionally, we have virtual agents like Alexa or Google Home (not robots, strictly speaking), which often misunderstand my thick accent and can't hold a conversation. So, where are the robots that Sci-Fi movies and books promised?
The last two years seemed promising with the introduction of new robotic platforms like Kuri, Nao, and Pepper, the friendly home robots, and TikTok, the tidying robot. However, Kuri and TikTok disappeared even before they could take off, and Nao's and Pepper's applications remained restricted to scripted tasks. Spot Mini showed some promise as a robot dog capable of cleaning the house, but working with humans in an ever-changing environment is a great challenge for it, and making this robot work in your home remains a distant prospect. Boston Dynamics also created Atlas, a robot capable of hopping, running, and even backflipping. However, the popular footage shows only the robot's best attempts, not its typical behavior.
So the question remains: where did we go wrong in creating the promised robots? One thing is for sure: creating these robots is not as easy as initially envisaged. We expect too much of robots. Our fantasies come from Sci-Fi movies, not from ongoing and extrapolated research. We expect our robots to look like humans and to have humanlike capabilities of sensing, grasping, and manipulation. Moreover, we expect them to reason about their surroundings and act autonomously. Let's dive into these expectations and look at where the problems lie.
Robots that look like humans are really difficult to manufacture because of our complex body structure. We can't even replicate a human hand, given its intricate anatomy: an adaptive skin layer, a core of bones for strength, nerves for sensory information, and muscles and tendons for flexible movement. The closest we have come is Sophia, but all she could do was recognize faces and give scripted answers.
Even if we replicate the joint structure of humans, adding the necessary adaptive capabilities without compromising strength remains a problem. Moreover, we can't add sensing capabilities remotely close to the human body's, with senses like touch, vibration, pressure, and temperature resolved at a microscopic level. This leaves humanlike robot bodies incapable of performing human-level tasks.
Grasping and manipulation are basic tasks that humans do effortlessly, even without visual aid. For a robot, they require meticulous planning because of our inability to integrate rich sensing and an adaptive "skin." Furthermore, we expect a robot to perform these tasks with 100% accuracy: even 99% would mean the robot drops one cup out of every 100 it picks up, and we certainly don't want that.
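To see why that last percentage point matters, here is a quick back-of-the-envelope sketch in Python (the success rates and grasp counts are illustrative assumptions, not benchmark results):

```python
# Expected number of dropped objects for a given grasp success rate.
# Illustrative numbers only; real-world grasp benchmarks vary widely.
def expected_failures(success_rate: float, attempts: int) -> float:
    return (1.0 - success_rate) * attempts

for rate in (0.99, 0.999, 0.9999):
    drops = expected_failures(rate, 10_000)
    print(f"{rate:.2%} success -> ~{drops:.0f} drops per 10,000 grasps")
```

At 99%, a household robot handling 10,000 objects would break roughly 100 of them; even 99.99% still means one accident per 10,000 grasps.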
Cameras that help robots see, and the algorithms that turn the sensory data into an understanding of the surroundings, are still not accurate. Even when we combine multiple cameras of different types, such as infrared, to expand a robot's field of view and the richness of its information, continuously processing such an enormous amount of data remains a challenge. To put things in perspective, our brain sees an object on the order of a million times before it can reliably identify it. The computational power and memory required to train a robot to recognize every object in its surroundings are still unattainable.
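To make "enormous amount of data" concrete, here is a rough, hypothetical estimate of the raw pixel throughput of a multi-camera robot (the resolutions, frame rates, and camera counts below are all assumptions for illustration):

```python
# Rough uncompressed bandwidth of one camera stream, in bytes per second.
def stream_bandwidth(width: int, height: int, bytes_per_pixel: int, fps: int) -> int:
    return width * height * bytes_per_pixel * fps

rgb   = stream_bandwidth(1920, 1080, 3, 30)  # one Full-HD color camera
depth = stream_bandwidth(640, 480, 2, 30)    # one infrared/depth camera
total = 4 * rgb + 2 * depth                  # hypothetical 4-RGB + 2-depth rig

print(f"~{total / 1e9:.2f} GB of raw pixels per second")
```

Roughly 0.8 GB of raw pixels arrive every second, before any detection, tracking, or planning has even started.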
Reasoning about one's surroundings and acting autonomously are capabilities that develop in a human over time. Sensory capabilities and reasoning skills are complementary, and our curiosity helps us learn and explore. How to code this is still a topic of ongoing research.
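One research direction, sketched below, rewards an agent for encountering states its own forward model predicts poorly, a simple "curiosity" signal driven by prediction error. This toy version uses tiny state vectors; real systems learn neural models over high-dimensional sensor data:

```python
import numpy as np

# Toy intrinsic-reward sketch: the bigger the forward model's prediction
# error, the more "surprising" (and rewarding) the new state is.
def curiosity_bonus(predicted_next, actual_next, scale: float = 1.0) -> float:
    return scale * float(np.linalg.norm(actual_next - predicted_next))

predicted = np.array([0.0, 0.0])
actual    = np.array([0.3, -0.1])   # the world surprised the agent
print(f"curiosity bonus: {curiosity_bonus(predicted, actual):.2f}")
```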
The lack of cheap robotic platforms with rich sensor modalities is one of the hurdles slowing down research. One might think: why not use simulations if robots are not affordable? Well, we are not yet good at transferring skills from simulation to real robots, although current research treats this issue as one of immense importance. Moreover, creating a simulation environment with accurate object representations, especially for deformable objects, is in itself challenging.
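One widely studied idea for narrowing that simulation-to-reality gap is domain randomization: rather than training in a single idealized simulator, physical parameters are resampled every episode so the learned policy cannot overfit to any one of them. A minimal sketch follows (the parameter names, ranges, and commented-out simulator factory are hypothetical placeholders):

```python
import random

def randomized_sim_params() -> dict:
    """Resample physics and sensor parameters for one training episode."""
    return {
        "friction":  random.uniform(0.5, 1.5),    # surface friction coefficient
        "mass_kg":   random.uniform(0.2, 0.6),    # object mass
        "latency_s": random.uniform(0.00, 0.05),  # actuation/sensing delay
        "cam_noise": random.uniform(0.0, 0.02),   # pixel noise std-dev
    }

for episode in range(3):
    params = randomized_sim_params()
    print(f"episode {episode}: {params}")
    # env = make_simulated_env(**params)   # hypothetical simulator factory
    # run_training_episode(env)            # hypothetical training loop
```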
Transferring algorithms from one robot to another is also a significant challenge because robots differ in morphology, sensor interfaces, available modalities, and accuracy. Unlike the web, where anyone can write a webpage and customize almost any page to a great extent, even on one's own PC, robotics hardware offers no such flexibility. The solutions offered for problems like manipulation, grasping, and planning remain platform-dependent, and most of them cater to a specific lab scenario. Taking those skills into the dynamic, unpredictable outside world is a huge challenge.
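To see why portability is hard, consider the kind of hardware-abstraction layer robotics largely lacks: algorithms written against an abstract interface, with a thin adapter per vendor. The sketch below is hypothetical, not an existing framework:

```python
from abc import ABC, abstractmethod

class Gripper(ABC):
    """Abstract gripper interface that a planning algorithm codes against."""
    @abstractmethod
    def move_to(self, x: float, y: float, z: float) -> None: ...

    @abstractmethod
    def grasp(self, force_n: float) -> bool:
        """Close the fingers; return True if an object was secured."""

class VendorAGripper(Gripper):
    """Thin adapter translating the interface to one vendor's driver."""
    def move_to(self, x, y, z):
        print(f"vendor-A driver: moving to ({x}, {y}, {z})")

    def grasp(self, force_n):
        print(f"vendor-A driver: closing at {force_n} N")
        return True

def pick(gripper: Gripper) -> bool:
    """Platform-independent pick routine; works with any Gripper adapter."""
    gripper.move_to(0.3, 0.0, 0.1)
    return gripper.grasp(force_n=5.0)

pick(VendorAGripper())
```

In practice, every platform exposes different coordinate conventions, units, and failure modes, which is exactly why such clean interfaces are hard to agree on.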
We lack both the software and the hardware capabilities to create the promised generation of robots. But given the speed of innovation, robots that do a single task autonomously and correctly are well within reach. The Roomba, a robot built to clean floors autonomously, is already doing a great job. We have teleoperated robots that go into hazardous areas and borrow human reasoning skills to accomplish tasks, for example, bomb-disposal robots. We have robots like Da Vinci, which lets surgeons manipulate surgical tools with submillimeter accuracy and improves the success of surgeries.
We might not have a robot from The Jetsons, but we can have a robot like Kuri that follows you around the house all day to remind you of your chores. We are testing and improving autonomous cars. Social-presence robots and autonomous delivery robots have proved especially useful during the current COVID crisis and will continue to improve. It took three years, but we now have a robotic hand that can solve a Rubik's cube from scratch without any human help.
The Sci-Fi robotic dream might seem a bit far away, but we are walking, or rather sprinting, towards it with every passing day.