Robot Pride Day Festival 2008

Where were YOU the night of August 4th?

Rather than opine on the state of our souls lost in the morass of tech-straction as our old relationships are replaced by Twitters, I wanted to report from the Robot Pride Day 2008 event that took place in Toronto, Canada.

The Sky Pirates held a private screening of “The Charge of the 08.ZIYA,” the film I wrote and directed with their help. That was followed by an amazing set by DJ Shine, who had just returned from touring the world as a key member of Nelly Furtado’s band. He mixes his sets by waving plastic blocks in front of a digital eyeball – it’s quite a thing to behold.

Mysterion – PhD in ESP – freaked out audience members with his mind-tricks, and Virginia D’Vine wowed and wooed the crowd with her burlesque dance, which included a live boa constrictor!

For my music concert, I was accompanied for an extensive set by drummer Eric Herrmann, guitarist Pete Devlin, bassist D’arcy Maguire, singer Aimee Lynn Chadwick and tabla legend Ritesh Das – a dream band for me by any stretch of the imagination.

And now, thanks to the miracle of paperback-sized streaming video I can share the highlights with you (at YouTube, Dailymotion, Metacafe, Revver, Vimeo, MySpace – whatever video portal you desire).

Thanks for experiencing,

Love Keram and the CCP crew

Celebrate good times, come on.

Happy Robot Pride Day

People Are Robots, Too. Almost


October 28, 2003

Popular culture has long pondered the question, “If it looks like a human, walks like a human and talks like a human, is it human?” So far the answer has been no. Robots can’t cry, bleed or feel like humans, and that’s part of what makes them different.

But what if they could think like humans?

Biologically inspired robots aren’t just an ongoing fascination in movies and comic books; they are being realized by engineers and scientists all over the world. While much emphasis is placed on developing physical characteristics for robots, like functioning human-like faces or artificial muscles, engineers in the Telerobotics Research and Applications Group at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., are among those working to program robots with forms of artificial intelligence similar to human thinking processes.

Why Would They Want to Do That?

“The way robots function now, if something goes wrong, humans modify their programming code and reload everything, then hope it eventually works,” said JPL robotics engineer Barry Werger. “What we hope to do eventually is get robots to be more independent and learn to adjust their own programming.”

Scientists and engineers take several approaches to control robots. The two extreme ends of the spectrum are called “deliberative control” and “reactive control.” The former is the traditional, dominant way in which robots function, by painstakingly constructing maps and other types of models that they use to plan sequences of action with mathematical precision. The robot performs these sequences like a blindfolded pirate looking for buried treasure; from point A, move 36 paces north, then 12 paces east, then 4 paces northeast to point X; thar be the gold.
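The pirate's route amounts to a fixed, open-loop plan: a list of headings and distances computed in advance from the map and executed without any sensing along the way. A minimal Python sketch of the idea (the coordinate math and step list are illustrative, not JPL code):

```python
import math

# A deliberative plan: a sequence of (heading_degrees, paces) steps computed
# in advance from a map, then executed blindly -- nothing senses the world.
PLAN = [(0, 36),    # 36 paces north
        (90, 12),   # 12 paces east
        (45, 4)]    # 4 paces northeast -> point X

def execute(plan, start=(0.0, 0.0)):
    """Follow the plan open-loop and return the final (east, north) position."""
    x, y = start
    for heading, paces in plan:
        rad = math.radians(heading)
        x += paces * math.sin(rad)   # east component
        y += paces * math.cos(rad)   # north component
    return x, y

x, y = execute(PLAN)   # where the robot ends up -- if the map was right
```

If the map is wrong, the plan still runs to completion; nothing in the loop looks at the world, which is exactly the fragility described next.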

The downside to this is that if anything interrupts the robot’s progress (for example, if the map is wrong or lacks detail), the robot must stop, make a new map and a new plan of actions. This re-planning process can become costly if repeated over time. Also, to ensure the robot’s safety, back-up programs must be in place to abort the plan if the robot encounters an unforeseen rock or hole that may hinder its journey.

“Reactive” approaches, on the other hand, get rid of maps and planning altogether and focus on live observation of the environment. Slow down if there’s a rock ahead. Dig if you see a big X on the ground.
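Those two rules can be written almost verbatim as a direct percept-to-action lookup; a minimal sketch (the percept keys and action names are invented for illustration):

```python
# Purely reactive control: no map, no plan -- each percept triggers an action.
def reactive_step(percept):
    if percept.get("x_on_ground"):    # "Dig if you see a big X on the ground."
        return "dig"
    if percept.get("rock_ahead"):     # "Slow down if there's a rock ahead."
        return "slow_down"
    return "keep_driving"             # default when nothing demands a reaction
```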

The JPL Telerobotics Research and Applications Group, led by technical group supervisor Dr. Homayoun Seraji, focuses on “behavior-based control,” which lies toward the “reactive” end of the spectrum. Behavior-based control allows robots to follow a plan while staying aware of the unexpected, changing features of their environment. Turn right when you see a red rock, go all the way down the hill and dig right next to the palm tree; thar be the gold.

Behavior-based control allows the robot a great deal of flexibility to adapt the plan to its environment as it goes, much as a human does. This presents a number of advantages in space exploration, including alleviating the communication delay that results from operating distant rovers from Earth.
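One common way to sketch behavior-based control is as a stack of simple behaviors arbitrated by priority: a safety behavior can pre-empt the landmark-following plan, which otherwise proceeds as written. This is an illustrative sketch of the general idea, not the JPL group's actual software:

```python
# Behavior-based control (sketch): simple behaviors checked in priority order.
def avoid_hazard(state):
    """Highest priority: react to unexpected obstacles."""
    if state.get("rock_ahead"):
        return "steer_around_rock"
    return None                      # no opinion; defer to lower priorities

def follow_landmarks(state):
    """Nominal plan: turn at the red rock, dig at the palm tree."""
    if state.get("see_red_rock"):
        return "turn_right"
    if state.get("at_palm_tree"):
        return "dig"
    return "continue_downhill"

BEHAVIORS = [avoid_hazard, follow_landmarks]   # priority order, highest first

def arbitrate(state):
    """Return the action of the highest-priority behavior with an opinion."""
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action
```

The plan survives surprises: an unexpected rock produces an avoidance action for a moment, then control falls back to the landmark plan without any re-planning.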

How Do They Do It?

Seraji’s group at JPL focuses on two of the many approaches to implementing behavior-based control: fuzzy logic and neural networks. The main difference between the two systems is that robots using fuzzy logic perform with a fixed set of knowledge that doesn’t improve, whereas robots with neural networks start out with no knowledge and learn over time.

Fuzzy Logic

“Fuzzy logic rules are a way of expressing actions as a human would, with linguistic instead of mathematical commands; for example, when one person says to another person, ‘It’s hot in here,’ the other person knows to either open the window or turn up the air conditioning. That person wasn’t told to open the window, but he or she knew a rule such as ‘when it is hot, do something to stay cool,’” said Seraji, a leading expert in robotic control systems who was recently recognized as the most published author in the Journal of Robotic Systems’ 20-year history.

By incorporating fuzzy logic into their programming, robots can function in a more human-like way and respond to visual or audible signals – or, as in the example above, turn on the air conditioning when they sense that the room is hot.
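The “hot room” rule can be sketched with a membership function that grades how true “hot” is, instead of a hard threshold. The temperature ramp and the air-conditioner mapping below are an invented example of the technique, not JPL code:

```python
# Fuzzy version of "when it is hot, do something to stay cool".
def hot_membership(temp_c):
    """Degree (0..1) to which a temperature counts as 'hot'."""
    if temp_c <= 22:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 22) / 8.0       # linear ramp between 22 C and 30 C

def cooling_effort(temp_c):
    """Map the fuzzy truth value to an actuator command (0..100% AC power)."""
    return 100.0 * hot_membership(temp_c)
```

A crisp rule would snap the air conditioner fully on at one exact temperature; the fuzzy rule instead runs it at half power when the room is only “somewhat hot.”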

Neural Networks

Neural networks are tools that allow robots to learn from their experiences, associate perceptions with actions and adapt to unforeseen situations or environments.

“The concepts of ‘interesting’ and ‘rocky’ are ambiguous in nature, but can be learned using neural networks,” said JPL robotics research engineer Dr. Ayanna Howard, who specializes in artificial intelligence and creates intelligent technology for space applications. “We can train a robot to know that if it encounters rocky surfaces, then the terrain is hazardous. Or if the rocky surface has interesting features, then it may have great scientific value.”

Neural networks mimic the human brain in that they simulate a large network of simple elements, similar to brain cells, that learn through being presented with examples. A robot functioning with such a system learns somewhat like a baby or a child does, only at a slower rate.

“We can easily tell a robot that a square is an equilateral object with four sides, but how do we describe a cat?” Werger said. “With neural networks, we can show the robot many examples of cats, and it will later be able to recognize cats in general.”

Similarly, a neural network can ‘learn’ to classify terrain if a geologist shows it images of many types of terrain and associates a label with each one. When the network later sees an image of a terrain it hasn’t seen before, it can determine whether the terrain is hazardous or safe based on its lessons.
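That learn-from-labeled-examples loop can be illustrated with the simplest possible learner, a single perceptron that adjusts its weights after each mistake. The two terrain features (rock coverage, average rock size) and the training data are invented for illustration; real terrain classifiers use far richer networks and actual imagery:

```python
# Toy perceptron that learns "rocky terrain is hazardous" from labeled
# examples, in the spirit of a geologist labeling terrain images.
# Features (invented): [fraction of surface covered by rocks, avg rock size].
def train(examples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, hazardous in examples:
            pred = 1 if w[0]*features[0] + w[1]*features[1] + b > 0 else 0
            err = hazardous - pred               # nonzero only on a mistake
            w = [wi + lr * err * fi for wi, fi in zip(w, features)]
            b += lr * err
    return w, b

def classify(w, b, features):
    return "hazardous" if w[0]*features[0] + w[1]*features[1] + b > 0 else "safe"

EXAMPLES = [([0.9, 0.8], 1), ([0.8, 0.9], 1),   # rocky  -> hazardous
            ([0.1, 0.2], 0), ([0.2, 0.1], 0)]   # smooth -> safe
w, b = train(EXAMPLES)
```

After training, the network generalizes: terrain it has never seen is judged by how rocky it looks, not by matching a stored example.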

Robotics for Today and Tomorrow

With continuous advances in robotic methods like behavior-based control, future space missions might be able to function without relying heavily on human commands. On the home front, similar technology is already used in many practical applications such as digital cameras, computer programs, dishwashers, washing machines and some car engines. The post office even uses neural networks to read handwriting and sort mail.

“Does this mean robots in the near future will think like humans? No,” Werger said. “But by mimicking human techniques, they could become easier to communicate with, more independent, and ultimately more efficient.”

JPL is a division of the California Institute of Technology in Pasadena, Calif.

August 4th, 2007. Robot Pride Day.

“Where were YOU the night of August 4th?”

Then the lights went out.

It wasn’t the same as when oil finally ran dry. Then, the streets ran with blood and power-lust flew over the people like an angry dragon, scouring for goats to eat and gold to line its nest.

This time was different. Serene. Of course there was the initial shock, fear, and panic – all endemic to such extraordinary and sudden change – though these shortly subsided; none of us expected or predicted the intense and palpable side effect of tranquility that started to grow within us. I am not a Luddite. I never hated the technology or our inextricable relationship with the Machine – but I cautioned against its effects on us as a thread in the larger weave. We were surrendering to its hypnotic effects, submitting to the comforts of its reliability and predictable nature. Despite the fact that the machines could break down, we presumed, and demanded, their subservient and consistent response to our manufactured needs.

When the lights went out, it was as though a huge logotherapeutic bomb had dropped and meaning returned to our small daily actions. We now found significance in the smallest gestures, within the moments in between – lying down in the grass, a passing look from a stranger, the beauty of a new pen stroke upon a blank page, searching, embellishing, discovering.

Success was no longer an ever-dangling carrot that we stalked voraciously, allowing so many quiet moments to rush past as we hurtled towards our graves lined with trophies and accolades. Rather, success became the ability to stop and savor how far along the road we had traveled, as we breathed in the grace of where we were and anticipated, with great enthusiasm, the unknown possibilities before us.

The new robots, those that ran on the energy of the sun, still walked among us, but the threat of our obsolescence was lifted – for now we were, once again, unpredictable and filled with awe and curiosity. We walked outside of any sort of grid. We were restored to something for machines to dream about.

Robot Pride Day flyer 2007