Monday Roundup: 20th October 2008
Posted by David Corney in Uncategorized.
Tags: Monday Roundup
When we see something, our visual system rapidly recognises its shape, colour, location and so on. These features are then “bound” to the object, so we usually see a “red ball” rather than separate “redness” and “ballness”.
Work by Otto, Ögmen, and Herzog shows that this is not always the case. Using short films of moving lines, they show that we can recognise the features of one object but then bind them to another.
The feature studied here is the “slope” of a line: left, right or vertical. The authors start by reproducing a known perceptual effect, in which a vertical line that is rapidly replaced by two flanking lines is rendered invisible. They then show that certain features of the invisible line may linger and be “incorrectly” bound to other lines: essentially, a vertical line appears to slope if it is grouped with an earlier line that was sloping.
What’s more, this happens even when the original sloping line is itself invisible.
The authors say: “how features are attributed to objects is one of the most puzzling issues in the neurosciences”. This paper doesn’t exactly solve the puzzle, but perhaps tells us which way up some of the pieces go…
Depth perception with binocular disparity
Our visual system uses many depth cues, such as object size and the way near objects occlude more distant ones. Binocular disparity refers to the small difference between the positions at which a single point in space projects onto our two retinas.
If we keep our eyes fixed on some point, the other points in the scene also project onto each retina, and points at different distances land on slightly different parts of the two retinas. That difference is the disparity.
This depth cue, stereopsis, has usually been thought to apply only to very nearby objects, within a few metres, along with cues like convergence (the eyes turn inwards slightly when looking at very close objects) and accommodation (the lens changes shape, again for very close focusing).
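The geometry behind this assumption can be sketched numerically. A minimal Python sketch (assuming a typical interocular separation of about 6.5 cm, and the standard small-angle approximation for disparity; both values are illustrative, not taken from the paper) shows how the disparity of a point slightly nearer than fixation shrinks with viewing distance:

```python
import math

# Illustrative value: typical human interocular distance, in metres.
IOD = 0.065

def disparity_arcsec(d, f):
    """Approximate disparity (in arcseconds) of a point at distance d,
    while fixating a point at distance f, both along the midline.
    Small-angle approximation: disparity ~ IOD * (1/d - 1/f) radians."""
    delta_rad = IOD * (1.0 / d - 1.0 / f)
    return math.degrees(delta_rad) * 3600.0

# Disparity of a point 10% nearer than fixation, at increasing distances:
for f in [1.0, 5.0, 20.0, 100.0]:
    print(f"{f:>5.0f} m: {disparity_arcsec(0.9 * f, f):.1f} arcsec")
```

Even at 100 m, a 10% depth difference still yields roughly 15 arcseconds of disparity, which is within the range of commonly quoted stereoacuity thresholds (a few arcseconds for good observers, a few tens for typical ones), so the paper’s conclusion that stereopsis remains usable at a distance is at least geometrically plausible.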
A new paper by Liu, Bovik and Cormack re-examines this. They followed volunteers walking through a forest and randomly asked them to stop and report what they were looking at. A laser-scanning system then measured the distance to that point and to other points nearby. Knowing where someone was fixating, and the distances to every point they could see, the binocular disparities across the scene could then be calculated. The results show that stereopsis should work even at quite large distances, because the disparities produced in natural scenes are typically larger than the perceptual threshold.
They also compared these findings with known neuroscience data and found a match with area MT in macaques, suggesting that our brains are tuned to the natural statistics of disparity. If we are so good at stereopsis, and the visual data is abundant, presumably we use it even for more distant scenes.
In a further study, they showed that the disparity statistics of indoor scenes are similar to those of the forest scenes. But of course, we didn’t evolve indoors, which makes me wonder whether we tend to prefer interiors that somehow match the image statistics we evolved in response to. Is there a neuroscience of feng shui?!
Otto, T., Ögmen, H., & Herzog, M. H. (2006). The flight path of the phoenix—The visible trace of invisible elements in human vision. Journal of Vision, 6(10), 1079–1086. DOI: 10.1167/6.10.7
Liu, Y., Bovik, A. C., & Cormack, L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8(11), 1–14. DOI: 10.1167/8.11.19