ScienCentral News
Virtual Reality and Vision (video)
May 31, 2001


Elsewhere on the web

History of Virtual Reality

The Joy of Visual Perception: A Web Book

Open Virtual Reality Testbed at NIST

Biorobotic Vision Laboratory at the Australian National University

University of Rochester Center for Visual Science

It's a problem most of us never think about, but scientists have long wanted to know how we manage to walk around without running into things.

As this ScienCentral News video report shows, researchers at Brown University used virtual reality helmets to explore how our eyes and mind work together to keep us on target.


Comparing a walk through Grand Central Station at rush hour to playing a video game is not a difficult leap to make. Doing both, says Bill Warren, Professor of Cognitive Sciences at Brown University, "You have to navigate your way through a very complicated world, with objects flying around and obstacles you have to avoid and pitfalls you can fall into... all using visual information." Yep, definitely Grand Central. He might add, "and you might not survive."

Which is probably why, when virtual reality was introduced in the late 1980s, it quickly became the tool of choice for studying how we humans so naturally use our vision to control our locomotion, an amazing feat essential to our survival. Understanding human motion perception and navigation will be key to building better robots that can navigate on Mars, as well as to aiding humans with certain visual impairments.

Recently, researchers in this field have used VR to study how we catch fly balls and how to make driving safer with intelligent rear-view mirrors.

Warren, at the world’s largest VR facility, the Virtual Environment Navigation Laboratory, or VENLab, wants to know how we walk around without bumping into things. What sort of visual information is most important?

That depends on what strategy we use for getting across that busy terminal. There are two main contenders. When you aim at a target and move toward it, you are using what is termed visual direction. When you navigate using the direction of motion of objects you are passing, that is called optic flow.
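To make the first strategy concrete, here is a minimal toy sketch of a "visual direction" walker: at each tick it turns in proportion to the angle between its heading and the target, then takes a step. This is an illustration only, not the researchers' model; the function names, the turn gain, and the 1.3 m/s walking speed are assumptions for the sketch.

```python
import math

def target_bearing(pos, heading, target):
    """Angle of the target relative to the walker's current heading,
    wrapped into [-pi, pi)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    return (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi

def step(pos, heading, target, speed=1.3, turn_gain=2.0, dt=0.1):
    """One simulation tick: rotate toward the target in proportion to the
    bearing error, then advance at roughly human walking speed."""
    heading += turn_gain * target_bearing(pos, heading, target) * dt
    new_pos = (pos[0] + speed * math.cos(heading) * dt,
               pos[1] + speed * math.sin(heading) * dt)
    return new_pos, heading

# Walk from the origin toward a target 5 m away.
pos, heading = (0.0, 0.0), 0.0
for _ in range(120):
    pos, heading = step(pos, heading, (3.0, 4.0))
```

An optic-flow walker would instead steer so that the focus of expansion of the motion it sees lines up with where it wants to go; decoupling that cue from target bearing is exactly what requires virtual reality.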

But the two strategies can't be separated out and controlled in the real world. So, using virtual reality, "We pitted the two against each other," Warren says. In the 40-foot-square VENLab, people wearing VR headsets walked around while immersed in a simulation. Sensors in the hall's ceiling tracked their paths as they tried to aim toward a target while researchers controlled how many passing objects they encountered.

It turned out that the more optic flow information was added, the more important it became. "When all people could see was the target, they walked in the direction of the target," he says. "But as we added more motion into the display and the world got more complicated... then the optic flow dominated."

"In other words," Warren says, "people are doing both."

And "it's a good thing," he adds. Using two strategies makes us "more robust in different environments." Robots on Mars should be so robust.
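The "doing both" finding suggests a simple cue-combination picture: weight the optic-flow cue more heavily as the scene supplies more visible motion. The sketch below is a hypothetical illustration of that idea, not the published model; the linear weighting and the `flow_strength` parameter are assumptions.

```python
def blended_heading_error(direction_error, flow_error, flow_strength):
    """Blend the two steering cues into one heading correction.

    flow_strength in [0, 1] stands in for how much motion the scene
    provides: 0 = bare target (visual direction wins), 1 = dense flow
    field (optic flow dominates). Values outside [0, 1] are clamped.
    """
    w = max(0.0, min(1.0, flow_strength))
    return (1.0 - w) * direction_error + w * flow_error
```

With only a target visible the walker steers by visual direction; as textured, moving objects are added, the flow term takes over, matching the pattern observed in the VENLab.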

The research was reported in the February 2001 issue of the journal Nature Neuroscience.

by Joyce Gramza

ScienCentral News is a production of ScienCentral, Inc.
in collaboration with the Center for Science and the Media.
248 West 35th St., 17th Fl., NY, NY 10001 USA (212) 244-9577.
The contents of these WWW sites © ScienCentral, 2000-2003. All rights reserved.
The views expressed in this website are not necessarily those of the NSF.
NOVA News Minutes and NOVA are registered trademarks of WGBH Educational Foundation and are being used under license.