Depth, doorknobs and distortions: Uncovering the secrets of 3D vision
Since the ugly duckling first peered into the pond, we have always been fascinated by reflections. Now scientists researching how the brain processes visual information have used ‘specular’ (reflective) objects to gain an insight into 3D vision. The new research (1) by a team from Birmingham, Cambridge and Giessen, which was published online this week in PNAS, reveals how the brain checks the ‘usefulness’ of the signals it receives from the senses, and explains why we sometimes misperceive shapes and distances.
The ability of mirrored objects to trick and tease our perception is both intriguing and uncanny. And no wonder – although we may think that seeing is believing, research has shown that, when looking at a mirrored surface, such as a shiny doorknob or a chrome bumper, we may misjudge the shape of the object.
When we look at an object we perceive a single image, but our eyes see the object from two different perspectives thanks to their differing horizontal positions on the head (close one eye, then the other, and you’ll see what I mean). Although we are not usually aware of it, having two viewpoints on the world is very useful – it gives us the ability to perceive depth. The brain receives two images of an object which place it in two slightly different locations and it uses this disparity between the two eyes’ views (termed the binocular disparity) to calculate depth – allowing us to accurately judge the position and shape of an object.
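The relationship described above can be sketched with the standard stereo triangulation formula, in which depth is inversely proportional to disparity. This is a hedged illustration with made-up numbers, not the model used in the paper; the function name and the parameter values are placeholders.

```python
# Illustrative sketch of stereo triangulation: for two horizontally
# separated viewpoints, the depth Z of a point relates to its binocular
# disparity d, the focal length f, and the inter-eye baseline b as
#   Z = f * b / d
# (all lengths in the same units; values below are hypothetical).

def depth_from_disparity(disparity: float, focal_length: float, baseline: float) -> float:
    """Estimate depth from a horizontal binocular disparity."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the viewer")
    return focal_length * baseline / disparity

# A larger disparity between the two eyes' images means a nearer object.
near = depth_from_disparity(0.10, focal_length=1.7, baseline=6.5)
far = depth_from_disparity(0.01, focal_length=1.7, baseline=6.5)
assert near < far
```

The key point for what follows is that this calculation is only as good as the disparity signal fed into it: if the two images do not actually show the same physical point, the formula still returns a depth, just the wrong one.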
Unfortunately, specular surfaces throw a shiny spanner in the works. Ordinarily, although it may appear to shift, the location of an object does not actually move when you look at it through your left eye or your right. Reflections, on the other hand, do. The location of a reflection on a curved glossy surface changes according to the observer’s viewpoint, so when the surface is viewed binocularly (with two eyes) each eye sees the reflection in a different location on the surface of the object. In such cases, the binocular disparity is abnormal and no longer an accurate indicator of depth, so we are liable to misjudge the shape of the object and the distance to its surface.
Luckily, in some cases, our brain is able to identify the visual information it receives about these objects as unreliable and avoid the error. The purpose of this study was to establish exactly how the brain identifies this spurious information.
The researchers identified two methods that the visual system might use to identify and overcome unreliable information from specular reflections. One suggestion, which had been made previously (2), is that the brain identifies the surface it is observing as a specular surface, and alters its interpretation of the binocular information accordingly. To identify the specular surface, the brain could use other (nonstereoscopic) information about the object, for instance its colour. Such cues are called ‘ancillary’ markers. The second possibility is that the visual system is able to detect when the binocular disparity signals it receives are substantially abnormal, perhaps falling outside the normal range of magnitudes or distributions, and so rejects that portion of information as untrustworthy. This method relies on ‘intrinsic’ cues, deduced from the nature of the signals themselves, rather than ‘ancillary’ markers.
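The two candidate strategies can be caricatured in code. This is purely an illustration of the distinction, not the authors’ model: the function names, the colour test and the threshold values are all invented stand-ins.

```python
# Caricature of the two strategies; all thresholds are hypothetical.

def ancillary_check(surface_looks_specular: bool) -> bool:
    """'Ancillary' strategy: distrust the disparity signal whenever
    nonstereoscopic cues (colour, highlights) say the surface is shiny."""
    return not surface_looks_specular

def intrinsic_check(disparity: float, normal_range=(-0.5, 0.5)) -> bool:
    """'Intrinsic' strategy: trust a disparity signal only if its
    magnitude lies within the range ordinary matte surfaces produce.
    The range here is a made-up placeholder value."""
    lo, hi = normal_range
    return lo <= disparity <= hi

# An extreme disparity is rejected as untrustworthy; a moderate one is
# accepted, even if it happens to come from a shiny surface.
assert not intrinsic_check(1.4)
assert intrinsic_check(0.2)
```

Note the difference: the ancillary check looks at what the object *is*, while the intrinsic check looks only at the signal itself. That difference is what the experiments below were designed to tease apart.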
The researchers wanted to determine the relative influence of these ancillary and intrinsic sources of information on our judgment of shape, with a view to discovering more about the general strategy the brain employs to assess the utility of sensory signals.
To test whether the visual system relies on ancillary markers to identify the object as a specular object, the team developed a method of ‘painting’ artificial reflections onto the surface of an object. They then compared observers’ depth judgments when viewing these painted reflections to their judgments when viewing specular reflections, and when viewing complex shapes (termed ‘potatoes’) compared to simple shapes (termed ‘muffins’).
From this analysis they were able to determine that the brain uses intrinsic cues, rather than ancillary ones, to detect when the visual information it receives is unreliable. The fact that the object is specular does not in itself alert the brain to the unreliability of the information it receives. This explains why, sometimes, we do still make misjudgments about specular objects. If the binocular disparity lies within the normal range of values, the brain will not be alerted to the unreliability of the signal and the resulting estimation of depth may be inaccurate.
This research gives scientists a fresh insight into the generalised way in which the brain analyses incoming sensory information. The findings may provide some useful clues for the design of robotic systems, which often rely almost exclusively on binocular disparity signals to ‘see’ shape. Professor Andrew Blake, one of the research team from Microsoft Research, Cambridge, summed up the study’s achievements: “Understanding human stereo vision is fascinating in its own right and also because of the connections with stereo vision systems used in Robotics today.”
(1) Alexander A. Muryy, Andrew E. Welchman, Andrew Blake, and Roland W. Fleming (2013). Specular reflections and the estimation of shape from binocular disparity. PNAS, published ahead of print January 22, 2013 DOI: 10.1073/pnas.1212417110
(2) Blake A, Bülthoff HH (1990). Does the brain know the physics of specular reflection? Nature 343(6254), 165-168 PMID: 2296307
Fascinated by the brain? Take a look at our new blog, ThInk, which is dedicated to exploring neuroscience in research, medicine, art and every day life.