Mind Controlled Robot


May 2, 2007

How would you like a robot that caters to your every whim, just by thinking? It sounds like science fiction, but researchers are on their way to exactly that. As this ScienCentral News video explains, the scientists are also gaining a better understanding of how the human brain works.

Tuning in the Brain

In a small room on the University of Washington campus, a human-looking robot walks a bit unsteadily toward its goal. Its slow, slightly shuffling gait, along with its size (it's about knee-high), is reminiscent of a child taking its first steps. The fact that the robot is playing with blocks only reinforces that impression. Nearby, like an inattentive parent, a young researcher wears what looks like a swim cap covered in wires and stares at what might be a computer video game.

But it's the researchers who are taking the first steps, learning how to command the robot simply by thinking. The researcher is watching the computer screen, waiting to answer the robot's questions and give it commands.





While the robot gets everyone's attention, it's the brain interface that is the focus of this research.

Tuning in to the brain and getting a robot to respond adequately to the commands is the goal of Rajesh Rao. "We're interested in understanding how the brain works," says Rao, "and then using that knowledge … to build, for example, prosthetic devices or helper robots."

"The brain uses electrical activity to propagate information," says Rao, "so that particular property lets us record from the brain in various ways." He emphasizes that this way of monitoring the brain is an improvement over past attempts because it tunes in to the brain from the scalp. Previously scientists had to go beneath the skin or even through the skull to get a brain signal of this quality.





They do this by starting with a rubber cap with holes. Explains Rao, "Each hole, there's one wire that's attached and then that wire conducts the electrical signal over to an amplifier and then that amplifies the signal and then it commits that to a computer that then does the rest of the processing." The recording technique is known as electroencephalography, or EEG.
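
The signal chain Rao describes (electrode, amplifier, computer) ends in digital signal processing. The snippet below is a rough sketch, not the team's actual software, of what a first processing step on the computer might look like: band-pass filtering a multichannel EEG buffer to keep the frequency range where the relevant brain activity lives. The channel count, sampling rate, and band edges are illustrative assumptions.

# Minimal sketch of a first processing step after the amplifier:
# band-pass filter a multichannel EEG buffer. Channel count, sampling
# rate, and band edges are illustrative assumptions, not the lab's values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256            # sampling rate in Hz (assumed)
N_CHANNELS = 32     # electrodes in the cap (assumed)

def bandpass(eeg, low_hz=0.5, high_hz=30.0, fs=FS, order=4):
    """Keep the 0.5-30 Hz band where slow evoked responses like the P3 live."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)   # zero-phase filtering per channel

if __name__ == "__main__":
    # Fake one second of raw data (channels x samples) in place of the amplifier feed.
    raw = np.random.randn(N_CHANNELS, FS)
    clean = bandpass(raw)
    print(clean.shape)   # (32, 256)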

He notes that the signal is very weak and that interference can come from activity as simple as the wearer clenching his or her jaw. He likens it to standing outside a closed room and trying to figure out who is talking, what they're saying, and where they are in that room.
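
One common way to handle the kind of interference Rao mentions, such as a jaw clench swamping the scalp signal, is simply to throw out stretches of data whose amplitude is implausibly large for brain activity. The sketch below illustrates that idea with a peak-to-peak threshold; the threshold value is an assumption for illustration, not the lab's setting.

import numpy as np

# Illustrative threshold: scalp EEG is on the order of tens of microvolts,
# while muscle artifacts such as jaw clenches can be far larger.
# 100 uV is an assumed cutoff, chosen only for illustration.
ARTIFACT_THRESHOLD_UV = 100.0

def is_clean(epoch_uv):
    """Return True if no channel's peak-to-peak swing exceeds the threshold."""
    peak_to_peak = epoch_uv.max(axis=-1) - epoch_uv.min(axis=-1)
    return bool(np.all(peak_to_peak < ARTIFACT_THRESHOLD_UV))

# Usage: keep only the clean one-second windows before further processing.
# epochs = [e for e in windows if is_clean(e)]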




The cap reads volunteer Josh Storz's brain signals and turns his thoughts into robot action.
Since the brain produces many signals all the time, the researchers also had to home in on one particular signal. Rao describes it: "When you are looking for something, such as, let's say, your keys… and then all of a sudden you see them on a table, then your brain registers a particular kind of response...it's called an 'Ah-hah response.'" The formal name for this brain response is the P3, or P300, response.
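
The P3 is a positive bump in the signal roughly 300 milliseconds after a stimulus the person cares about. Because any single trial is buried in noise, a standard way to see it is to average many short segments time-locked to the stimulus and look for that bump. The sketch below shows the idea; the timing window and sampling rate are assumptions, and this is not the team's actual detector.

import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def p3_score(epochs, fs=FS, window=(0.25, 0.5)):
    """Average stimulus-locked epochs (trials x samples, stimulus at t=0)
    and return the mean amplitude in the 250-500 ms window where the
    P3 'ah-hah' response is expected."""
    avg = epochs.mean(axis=0)                      # averaging cancels out most noise
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return avg[start:stop].mean()                  # larger = more P3-like

# A photo the user attended to should score higher than one they ignored:
# if p3_score(attended_epochs) > p3_score(ignored_epochs): ...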

Before someone can command the robot, the researchers go through a fine-tuning process with each new user. They instruct the user, wearing the wired cap, to concentrate on just one photograph as a number of them are presented on screen, while the electrodes measure brain activity. Once the researchers are satisfied, the user is ready to command the robot. The user only sends commands; the actual control of the robot still belongs to the computer.
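
That fine-tuning step amounts to collecting labeled examples: segments of EEG recorded when the highlighted photograph was the one the user was concentrating on, and segments when it wasn't, then fitting a classifier that can tell the two apart for that particular user. The sketch below uses scikit-learn's linear discriminant analysis as a stand-in, a common choice for this kind of signal, though the article does not say what the UW group actually uses.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_user_model(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples) EEG segments from calibration.
    labels: 1 if the flashed photo was the one the user focused on, else 0."""
    X = epochs.reshape(len(epochs), -1)      # flatten each trial into a feature vector
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf

def pick_target(clf, epochs_per_photo):
    """Given one averaged epoch per candidate photo, return the index of the
    photo whose response looks most like an attended (target) response."""
    X = np.stack([e.reshape(-1) for e in epochs_per_photo])
    scores = clf.decision_function(X)        # higher = more target-like
    return int(np.argmax(scores))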

In their test, the robot is told to go to a table with two blocks. The robot's camera "eyes" and the computer guide it to the blocks. The user is then shown pictures of the two blocks, one green and one red, and chooses one. The robot picks up the designated block and turns around. The user then decides which of two small tables the robot should take the block to. Once the user thinks a command, the robot and computer do the rest.

The robot moves one of two blocks from one table to another. The person decides which block goes where, but the computer and robot work out the individual steps.
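
The division of labor the caption describes, where the user only picks the goal and the computer and robot handle the steps, is sometimes called shared control. Below is a toy sketch of that split; the RobotClient class and its methods are hypothetical, invented purely for illustration.

# Toy sketch of shared control: the brain interface supplies only high-level
# choices, and scripted routines carry out the low-level steps.
# The RobotClient class and its methods are hypothetical, for illustration only.

class RobotClient:
    def walk_to(self, place): print(f"walking to {place}")
    def pick_up(self, thing): print(f"picking up {thing}")
    def put_down(self):       print("putting the block down")

def fetch_block(robot, block_color, destination_table):
    """The user chose block_color and destination_table by thought;
    everything below is handled by the computer and robot."""
    robot.walk_to("block table")
    robot.pick_up(f"{block_color} block")
    robot.walk_to(destination_table)
    robot.put_down()

if __name__ == "__main__":
    fetch_block(RobotClient(), "green", "table A")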
The test doesn't work every time. Sometimes the robot stops short of the table. Sometimes the block sits at a funny angle, or the table isn't positioned quite right for the robot. Colors similar to those of the blocks or the tables can "distract" the robot, sending it off course. These are the kinds of things that a young child, learning his or her way around the world, masters in just a few short months. But for the robot and the researchers working with it, they are major engineering hurdles that will take years to solve.

Rao adds that, in addition to these outside influences, the robot's parts warm up during a test session and then behave slightly differently. Getting the robot to notice and adapt to changes like these is one of the many problems the team will have to solve before any of us can expect our own robot butler, ready to answer our every whim.

This work was presented at the Current Trends in Brain-Computer Interfacing Workshop on December 9, 2006, and was supported by grants from the Packard Foundation, the Office of Naval Research, and the National Science Foundation.


 
by Jack Penland
