ScienCentral News
Look Ma, No Mouse
August 08, 2000

If you’re getting sick of point-and-click and tired of typing away at the computer, there is hope. While the keyboard and mouse practically define the computer of today, they won’t, it seems, be around forever.

Researchers at the Center for Advanced Information Processing (CAIP) at Rutgers University in New Jersey are developing a system of devices designed to get information into and out of computers more easily and more naturally than will ever be possible with a keyboard and mouse.

A computer with three senses

The system, called the Speech, Text, Image, and Multimedia Advanced Technology Effort—STIMULATE for short—aims to bring the computers of tomorrow to their senses. "The mouse and keyboard make us interface in a very regimented way with the computer, and it isn’t a natural way like the way we interface with other human beings," says Ed DeVinney, senior associate director of the CAIP. "We as persons speak to each other with sight, sound and touch, and we are emulating that in the human-computer interface." The National Science Foundation is helping fund the research.

These two crosshairs are used by the gaze-tracking mechanism to determine where on the screen a user is looking.

The system relies on an array of input devices called a Multi-modal Input Manager (MIM). A "force-feedback" glove lets the user point to and manipulate objects on a computer screen. "Gaze-tracking" infrared beams located near the monitor follow the position of the user’s eyes, so that users can manipulate a cursor on the screen while doing something else with their hands. The MIM system also responds to voice commands through a microphone (it even talks back), and can distinguish the user’s voice from other sounds in a noisy environment.

Special software interprets the information coming into the computer from its three senses and combines it. This, says DeVinney, is one of the most important breakthroughs of the technology. "One of the key items in developing this system was to develop a manager that could take these three strains of modalities and do something sensible—to implement the intention of the user," he says.
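DeVinney’s description suggests what such a fusion manager might look like in code. Below is a minimal sketch, assuming hypothetical event types and a simple time-window rule that matches a spoken command to the most recent gaze fixation or pointing gesture; none of the names reflect the actual CAIP software.

```python
import time
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical event types for the three input streams; the names and
# fields are illustrative and not taken from the CAIP/STIMULATE software.
@dataclass
class GazeEvent:
    x: float            # screen coordinates the user is looking at
    y: float
    timestamp: float

@dataclass
class GloveEvent:
    gesture: str        # e.g. "point", "grab", "release"
    x: float            # screen coordinates being pointed at (if any)
    y: float
    timestamp: float

@dataclass
class SpeechEvent:
    command: str        # e.g. "diagnose", "move", "delete"
    timestamp: float

class MultiModalInputManager:
    """Fuses gaze, glove, and speech events into a single user intention.

    Rule of thumb used here: a spoken command applies to whatever the
    user was looking at (or pointing to) shortly before speaking.
    """
    WINDOW = 0.5  # seconds of allowed skew between modalities

    def __init__(self):
        self.last_gaze: Optional[GazeEvent] = None
        self.last_glove: Optional[GloveEvent] = None

    def on_gaze(self, event: GazeEvent) -> None:
        self.last_gaze = event

    def on_glove(self, event: GloveEvent) -> None:
        self.last_glove = event

    def on_speech(self, event: SpeechEvent) -> Optional[dict]:
        target: Optional[Tuple[float, float]] = None
        # Prefer the most recent gaze fixation as the command's target...
        if self.last_gaze and event.timestamp - self.last_gaze.timestamp < self.WINDOW:
            target = (self.last_gaze.x, self.last_gaze.y)
        # ...otherwise fall back to a recent pointing gesture.
        elif (self.last_glove and self.last_glove.gesture == "point"
              and event.timestamp - self.last_glove.timestamp < self.WINDOW):
            target = (self.last_glove.x, self.last_glove.y)
        if target is None:
            return None   # not enough context to act on the command
        return {"action": event.command, "target": target}

# Example: the user looks at a spot on the screen and says "diagnose".
mim = MultiModalInputManager()
now = time.time()
mim.on_gaze(GazeEvent(x=412.0, y=230.0, timestamp=now))
print(mim.on_speech(SpeechEvent(command="diagnose", timestamp=now + 0.2)))
# -> {'action': 'diagnose', 'target': (412.0, 230.0)}
```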

A touchy-feely computer

After a 10-minute calibration session, the user can select objects on the screen just by looking at them and then tell the computer what to do using voice commands. This multi-modal interface should allow a user to perform unconventional tasks on the computer. One potential application DeVinney sees for these kinds of sensors is in telemedicine. "A doctor might actually have the patient on an operating table, have his hands busy, but be able to look at the screen, and by looking at the screen indicate a place on the patient’s body where he has some interest and say for example ‘diagnose’," he says.
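The calibration step pins down how raw eye-tracker readings map onto screen pixels. Here is one way that mapping could be computed, shown as an illustrative least-squares affine fit in Python; the model and the numbers are assumptions, not a description of the CAIP procedure.

```python
import numpy as np

def fit_calibration(raw_points, screen_points):
    """Fit an affine map from raw tracker readings to screen coordinates.

    raw_points, screen_points: matched (N, 2) samples collected while the
    user fixates known calibration targets on the monitor.
    """
    raw = np.asarray(raw_points, dtype=float)
    scr = np.asarray(screen_points, dtype=float)
    # Augment with a constant column so the fit includes an offset.
    A = np.hstack([raw, np.ones((len(raw), 1))])
    # Solve A @ M ~= scr for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, scr, rcond=None)
    return M

def gaze_to_screen(M, raw_xy):
    x, y = raw_xy
    return np.array([x, y, 1.0]) @ M

# During calibration the user looks at four known targets:
raw = [(0.10, 0.05), (0.90, 0.06), (0.11, 0.80), (0.88, 0.82)]
targets = [(50, 40), (1230, 40), (50, 980), (1230, 980)]
M = fit_calibration(raw, targets)
print(gaze_to_screen(M, (0.5, 0.45)))  # roughly the middle of the screen
```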

Ankle therapy disguised as a video game.

The technology can also be used for physical therapy. One example already in use is a flight simulator game controlled by an apparatus around the user’s ankle. The apparatus gets feedback from the game and applies different levels of force that the user has to counteract while flying the airplane.
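In code, that feedback loop could look something like the sketch below, which turns a game-supplied difficulty level into a resisting force and converts the patient’s counter-force into a steering input. The constants and function names are hypothetical.

```python
def resistance_from_game(difficulty_level, max_force_newtons=40.0):
    """Map the game's difficulty (0..1) to a resisting force at the ankle."""
    return max(0.0, min(1.0, difficulty_level)) * max_force_newtons

def control_input(applied_force, resisting_force, gain=0.05):
    """The net force from the patient, after overcoming the resistance,
    becomes the steering input the flight simulator sees."""
    return gain * (applied_force - resisting_force)

# One iteration of the loop: the game asks for moderate resistance,
# the patient pushes back at 30 N, and the plane banks accordingly.
resist = resistance_from_game(0.5)      # 20 N of resistance
print(control_input(30.0, resist))      # 0.5 -> steering command
```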

If the task at hand requires a more hands-on approach, the force feedback glove can be used to manipulate objects on the screen. It understands hand gestures by sensing the positions of the user’s fingertips relative to their palm. The glove not only allows the user to point to objects and grab hold of them to move them around, but also returns information from the computer, giving the glove wearer a virtual sense of what each object feels like.
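A rough sketch of that idea: classify hand posture from the distance between each fingertip and the palm. The thresholds and gesture labels below are illustrative assumptions, not the glove’s actual logic.

```python
import math

def classify_gesture(palm, fingertips, closed_threshold=6.0):
    """Classify a hand posture from fingertip positions relative to the palm.

    palm: (x, y, z) in cm; fingertips: dict mapping finger name -> (x, y, z).
    A finger counts as extended if its tip is far enough from the palm.
    """
    extended = {finger for finger, tip in fingertips.items()
                if math.dist(palm, tip) > closed_threshold}
    if not extended:
        return "grab"          # all fingers curled toward the palm
    if extended == {"index"}:
        return "point"         # only the index finger extended
    return "open"              # open hand / release

palm = (0.0, 0.0, 0.0)
fingertips = {"thumb": (3, 2, 1), "index": (9, 1, 0),
              "middle": (4, 1, 0), "ring": (3, 1, 0), "pinky": (3, 1, 0)}
print(classify_gesture(palm, fingertips))  # -> "point"
```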

By developing a more natural way of communicating with computers, the researchers hope the end result will be better communication between people. "The kinds of applications we’re looking at tend to be collaborative applications where multiple people—at maybe different sites around the world, on their computers and on the internet—are trying to work together to get a job done," says DeVinney.

When will it be ready?

According to the Rutgers team, the STIMULATE system is still in the prototype stage, but it has been tested in a real-life situation with the New Jersey National Guard. The military, which is helping to fund the research along with the NSF, hopes the system can improve communication between officers at a command post and their subordinates in the field, particularly for managing the complex logistical problems the armed forces face. Army National Guard officers at Fort Dix in New Jersey used the system in a disaster-relief simulation to manipulate structures, personnel and equipment on two- and three-dimensional maps. These kinds of data are hard to manipulate with a keyboard and mouse, say the researchers, which is why grease pencils and acetate overlays remain the military’s top choice for dealing with logistics data.

Aside from the more immediate military and medical applications, multi-modal approaches to communicating with computers could help those with special needs. It is also hoped that lowering the barrier between a machine and its user will reduce the training time employees need to learn a new system.

"Computers have become much more productive because they’re easier to use and more intuitive and they’ve reached more people," says DeVinney. "The goal is to extend this still further, so that people can walk up to a machine and in short order figure out what it can do for them."

Elsewhere on the web:

Human-Computer Interaction Gets a Helping Hand, Eye and Voice - from the National Science Foundation

Carnegie Mellon University’s Interactive Systems Lab

Oregon Graduate Institute’s Center for Human-Computer Communication

University of Maryland Human Computer Interaction Lab

Microsoft’s Research Division

Human-Computer Interaction Lab at Virginia Tech

Journal of Human-Computer Interaction



by Tom Clarke


ScienCentral News is a production of ScienCentral, Inc.
in collaboration with the Center for Science and the Media.
248 West 35th St., 17th Fl., NY, NY 10001 USA (212) 244-9577.
The contents of these WWW sites © ScienCentral, 2000-2003. All rights reserved.
The views expressed in this website are not necessarily those of the NSF.
NOVA News Minutes and NOVA are registered trademarks of WGBH Educational Foundation and are being used under license.