Robots Are People, Too. Almost

source: http://www.jpl.nasa.gov/news/features.cfm?feature=500

October 28, 2003

Popular culture has long pondered the question, “If it looks like a human, walks like a human and talks like a human, is it human?” So far the answer has been no. Robots can’t cry, bleed or feel like humans, and that’s part of what makes them different.

But what if they could think like humans?

Biologically inspired robots aren’t just an ongoing fascination in movies and comic books; they are being realized by engineers and scientists all over the world. While much emphasis is placed on developing physical characteristics for robots, like functioning human-like faces or artificial muscles, engineers in the Telerobotics Research and Applications Group at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., are among those working to program robots with forms of artificial intelligence similar to human thinking processes.

Why Would They Want to Do That?

“The way robots function now, if something goes wrong, humans modify their programming code and reload everything, then hope it eventually works,” said JPL robotics engineer Barry Werger. “What we hope to do eventually is get robots to be more independent and learn to adjust their own programming.”

Scientists and engineers take several approaches to control robots. The two extreme ends of the spectrum are called “deliberative control” and “reactive control.” The former is the traditional, dominant way in which robots function: they painstakingly construct maps and other types of models, then use them to plan sequences of action with mathematical precision. The robot performs these sequences like a blindfolded pirate looking for buried treasure: from point A, move 36 paces north, then 12 paces east, then 4 paces northeast to point X; thar be the gold.

The downside to this approach is that if anything interrupts the robot’s progress (for example, if the map is wrong or lacks detail), the robot must stop, make a new map and draw up a new plan of action. This re-planning process can become costly if repeated over time. Also, to ensure the robot’s safety, back-up programs must be in place to abort the plan if the robot encounters an unforeseen rock or hole that may hinder its journey.
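As a rough illustration of that failure mode, the sketch below plays out a fixed, deliberative plan in a toy world written in Python. The waypoints, the surprise rock and every name in it are invented for this example; it is not JPL code.

```python
# Illustrative sketch of deliberative control: the robot executes a
# precomputed sequence of moves and must stop and replan if the world
# disagrees with its map. The grid world and plan are made up.

PLAN = [("north", 36), ("east", 12), ("northeast", 4)]  # paces to point X

def execute_plan(plan, world):
    """Follow the plan step by step; abort if an unmapped obstacle appears."""
    for heading, paces in plan:
        for _ in range(paces):
            if world.blocked_ahead(heading):
                # The map was wrong: the only recourse is to rebuild the map
                # and compute a whole new plan.
                return "replan"
            world.step(heading)
    return "arrived"

class ToyWorld:
    """Tiny stand-in for a mapped environment with one surprise rock."""
    MOVES = {"north": (0, 1), "east": (1, 0), "northeast": (1, 1)}

    def __init__(self):
        self.position = [0, 0]
        self.surprise_rock = (0, 20)   # not on the robot's map

    def blocked_ahead(self, heading):
        dx, dy = self.MOVES[heading]
        return (self.position[0] + dx, self.position[1] + dy) == self.surprise_rock

    def step(self, heading):
        dx, dy = self.MOVES[heading]
        self.position[0] += dx
        self.position[1] += dy

print(execute_plan(PLAN, ToyWorld()))   # -> "replan": one surprise rock halts everything
```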

“Reactive” approaches, on the other hand, get rid of maps and planning altogether and focus on live observation of the environment. Slow down if there’s a rock ahead. Dig if you see a big X on the ground.
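A purely reactive controller, by contrast, can be sketched as little more than a lookup from the current observation to an action. Again, the sensor readings and rules below are invented for illustration.

```python
# Illustrative sketch of reactive control: no map and no plan, just
# direct rules from the current sensor reading to an action.

def reactive_step(sensors):
    """Pick an action from what the robot sees right now."""
    if sensors.get("big_x_on_ground"):
        return "dig"
    if sensors.get("rock_ahead"):
        return "slow_down"
    return "drive_forward"

# One pass through a few simulated observations:
for reading in [{"rock_ahead": True}, {"big_x_on_ground": True}, {}]:
    print(reactive_step(reading))
# -> slow_down, dig, drive_forward
```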

The JPL Telerobotics Research and Applications Group, led by technical group supervisor Dr. Homayoun Seraji, focuses on “behavior-based control,” which lies toward the “reactive” end of the spectrum. Behavior-based control allows robots to follow a plan while staying aware of the unexpected, changing features of their environment. Turn right when you see a red rock, go all the way down the hill and dig right next to the palm tree; thar be the gold.

Behavior-based control gives the robot a great deal of flexibility to adapt the plan to its environment as it goes, much as a human does. This offers a number of advantages for space exploration, including easing the problem of the communication delay involved in operating distant rovers from Earth: a rover that can handle surprises on its own does not have to stop and wait for new instructions from home.
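One common way to realize this style of control is to let several simple behaviors compete on every control cycle, with the most urgent behavior that has an opinion winning out. The sketch below shows that generic arbitration idea with invented behaviors and priorities; it is not JPL’s actual controller.

```python
# Illustrative behavior-based arbitration: a plan-like "follow the route"
# behavior runs by default, but reactive behaviors can override it when
# the environment demands. Behaviors and priorities are invented.

def avoid_obstacle(state):
    if state.get("rock_ahead"):
        return "steer_around_rock"
    return None                                 # defer to lower-priority behaviors

def seek_landmark(state):
    if state.get("red_rock_visible"):
        return "turn_right_at_red_rock"
    return None

def follow_route(state):
    return "continue_downhill_to_palm_tree"     # the standing plan

# Ordered from highest to lowest priority.
BEHAVIORS = [avoid_obstacle, seek_landmark, follow_route]

def behavior_based_step(state):
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(behavior_based_step({"red_rock_visible": True}))  # turn_right_at_red_rock
print(behavior_based_step({"rock_ahead": True}))         # steer_around_rock
print(behavior_based_step({}))                           # continue_downhill_to_palm_tree
```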

How Do They Do It?

Seraji’s group at JPL focuses on two of the many approaches to implementing behavior-based control: fuzzy logic and neural networks. The main difference between the two is that robots using fuzzy logic perform with a set knowledge that doesn’t improve, whereas robots with neural networks start out with no knowledge and learn over time.

Fuzzy Logic

“Fuzzy logic rules are a way of expressing actions as a human would, with linguistic instead of mathematical commands; for example, when one person says to another person, ‘It’s hot in here,’ the other person knows to either open the window or turn up the air conditioning. That person wasn’t told to open the window, but he or she knew a rule such as ‘when it is hot, do something to stay cool,’” said Seraji, a leading expert in robotic control systems who was recently recognized as the most published author in the Journal of Robotic Systems’ 20-year history.

With fuzzy logic built into their control software, robots can function in a more human-like way, responding to visual or audible signals or, as in the example above, turning on the air conditioning when they judge the room to be hot.
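A minimal sketch of how such a rule might look in software: “hot” becomes a degree between 0 and 1 rather than a yes-or-no answer, and the cooling effort scales with it. The temperature thresholds and the single rule below are assumptions made purely for illustration.

```python
# Minimal fuzzy-logic sketch: "hot" is not true/false but a degree between
# 0 and 1, and the cooling action scales with that degree.

def membership_hot(temp_c):
    """Degree to which a room temperature counts as 'hot' (0.0 to 1.0)."""
    if temp_c <= 22:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 22) / (30 - 22)   # linear ramp between the two thresholds

def cooling_command(temp_c):
    """Fuzzy rule: IF the room is hot THEN run the air conditioning that hard."""
    return {"ac_power": membership_hot(temp_c)}

for t in (21, 25, 31):
    print(t, cooling_command(t))
# 21 -> ac_power 0.0, 25 -> ac_power 0.375, 31 -> ac_power 1.0
```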

Neural Networks

Neural networks are tools that allow robots to learn from their experiences, associate perceptions with actions and adapt to unforeseen situations or environments.

“The concepts of ‘interesting’ and ‘rocky’ are ambiguous in nature, but can be learned using neural networks,” said JPL robotics research engineer Dr. Ayanna Howard, who specializes in artificial intelligence and creates intelligent technology for space applications. “We can train a robot to know that if it encounters rocky surfaces, then the terrain is hazardous. Or if the rocky surface has interesting features, then it may have great scientific value.”

Neural networks mimic the human brain in that they simulate a large network of simple elements, similar to brain cells, that learn through being presented with examples. A robot functioning with such a system learns somewhat like a baby or a child does, only at a slower rate.
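The learning step can be sketched with a single artificial neuron, the simplest of those brain-cell-like elements: it nudges its weights whenever a labeled example proves it wrong. The toy task and numbers below are invented and far simpler than anything a rover would face.

```python
# Tiny sketch of "simple elements that learn from examples": a single
# artificial neuron adjusts its weights whenever a labeled example proves
# it wrong (perceptron-style learning).

def train_neuron(examples, passes=20, lr=0.1):
    """examples: list of (feature_vector, label 0 or 1); returns weights, bias."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(passes):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction            # nonzero only when wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy association to learn: output 1 only when both inputs are present.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(examples)
print([predict(w, b, x) for x, _ in examples])   # -> [0, 0, 0, 1]
```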

“We can easily tell a robot that a square is an equilateral object with four sides, but how do we describe a cat?” Werger said. “With neural networks, we can show the robot many examples of cats, and it will later be able to recognize cats in general.”

Similarly, a neural network can ‘learn’ to classify terrain if a geologist shows it images of many types of terrain and associates a label with each one. When the network later sees an image of a terrain it hasn’t seen before, it can determine whether the terrain is hazardous or safe based on its lessons.
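That workflow might look something like the sketch below, which trains a small off-the-shelf neural network (scikit-learn’s MLPClassifier, an assumed dependency) on a handful of invented terrain descriptors and then labels a scene it has never seen. The features, numbers and labels are illustrative only.

```python
# Sketch of the terrain-labeling workflow: learn "hazardous" vs. "safe"
# from labeled examples, then judge a new scene. Data are invented.

from sklearn.neural_network import MLPClassifier

# Each example: [fraction of ground covered by rocks, average slope in degrees],
# labeled the way a geologist might: 1 = hazardous, 0 = safe.
terrain_features = [[0.70, 25], [0.55, 30], [0.60, 20],   # rocky, steep scenes
                    [0.05,  3], [0.10,  5], [0.15,  8]]   # smooth, flat scenes
labels = [1, 1, 1, 0, 0, 0]

net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(terrain_features, labels)

# A scene the network has never seen before: moderately rocky and fairly steep.
new_scene = [[0.50, 22]]
print("hazardous" if net.predict(new_scene)[0] == 1 else "safe")
```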

Robotics for Today and Tomorrow

With continuous advances in robotic methods like behavior-based control, future space missions might be able to function without relying heavily on human commands. On the home front, similar technology is already used in many practical applications such as digital cameras, computer programs, dishwashers, washing machines and some car engines. The post office even uses neural networks to read handwriting and sort mail.

“Does this mean robots in the near future will think like humans? No,” Werger said. “But by mimicking human techniques, they could become easier to communicate with, more independent, and ultimately more efficient.”

JPL is a division of the California Institute of Technology in Pasadena, Calif.