
Fast at the peripheries: deaf people have equally fast responses throughout the visual field. December 13, 2011

Posted by Emma Byrne in Uncategorized.


File under: “it turns out it’s more complicated than that.” Many deaf people have better-than-average vision, and many studies have shown that the parts of the brain that would otherwise be used for hearing help out with vision instead. New work from a joint Italian and French team demonstrates that things may not be as straightforward as we might think: deaf people may have much better peripheral vision than their hearing counterparts.

In an elegantly simple experiment, the team took ten people who had been profoundly deaf since infancy and ten matched hearing controls, and asked them to carry out a simple target-spotting task (see Figure 1). The participants pressed a key when they saw a target appear in one of eight locations: four close to the centre of the visual field and four further out in the periphery.

Figure 1: Experimental protocol and behavioural results from Bottari et al. (2011). The dotted circles in part (a) show the eight target locations; the “C” and “reverse C” shapes are the targets that the participants had to identify. Not only did the participants who had been deaf since childhood respond faster, they were also less affected by the position of the stimulus than the participants with unimpaired hearing.

The researchers found that the participants who had been deaf since childhood spotted and responded to the targets much faster than the hearing participants did. The most interesting behavioural result, however, was that there is something very different about the makeup of the visual fields of the deaf and hearing participants: the hearing participants’ reaction times slowed when the targets were at the edges of the visual field, but the deaf participants’ reactions stayed equally fast. This suggests that deaf people can process detailed information from all across the visual field equally effectively, and aren’t hampered by the limits of peripheral vision in the way their hearing counterparts are.
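To make the shape of that analysis concrete, here is a minimal sketch of the group × eccentricity interaction test on reaction times. The cell means, spreads and sample sizes below are invented purely for illustration (only the qualitative pattern mirrors the paper’s result), and the between-subjects ANOVA via statsmodels is my choice of tool, not necessarily the authors’:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
# Hypothetical mean reaction times (ms): hearing participants slow down in
# the periphery; deaf participants stay roughly flat.
means = {"deaf": {"central": 330, "peripheral": 332},
         "hearing": {"central": 360, "peripheral": 395}}
rows = [{"group": g, "eccentricity": e, "rt": rt}
        for g, by_ecc in means.items()
        for e, mu in by_ecc.items()
        for rt in rng.normal(mu, 20, size=10)]  # 10 simulated observers per cell
df = pd.DataFrame(rows)

# The key result is the group x eccentricity interaction term.
model = ols("rt ~ C(group) * C(eccentricity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```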

Bottari, D., Caclin, A., Giard, M., & Pavani, F. (2011). Changes in early cortical visual processing predict enhanced reactivity in deaf individuals. PLoS ONE, 6(9). DOI: 10.1371/journal.pone.0025607


What colour chimeras tell us about vision. July 28, 2008

Posted by Emma Byrne in Uncategorized.

I managed two impossible things before lunch today[1]. I crossed the road this morning, confident I could avoid the big red bus that I saw in my peripheral vision. I read the small-type links in the WordPress sidebar, despite the fact that they are in blue text. However, if my retina is to be believed, neither should be possible.

Colour photo-receptors (cones) are not distributed evenly across the retina. In the fovea, “red” and “green” cones are densely packed, and there are no “blue” cones and no rods. So detailed vision takes place with no (direct) knowledge of short-wave light.

However, only 5° of visual angle from the fovea, the density of the cones drops dramatically (from around 150,000/mm² at the central fovea to fewer than 10,000/mm²), while the density of the rods rises from none at the fovea to more than 150,000/mm² at around 15° from the centre of the fovea.

5° of visual angle is about two thumbs’ width when held at arm’s length. Hold up your hands with your thumbs side by side (at arm’s length). Keep your hands facing forward, so they look something like the “bird” hand shadow. Now fixate on your thumbs. Everything around your thumbs is seen by an area of the retina that is much richer in rods than in cones, whereas the thumbs themselves are seen with the part of the retina that is pretty much all cones. Yet the skin on the back of your hands looks just as colourful as the skin on your thumbs.
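That “two thumbs at arm’s length” figure follows from simple trigonometry: an object of size s at distance d subtends a visual angle of 2·atan(s/2d). A quick Python sketch, assuming two thumbs span roughly 5 cm and an arm’s length is roughly 60 cm (both figures are my rough assumptions):

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Two thumbs side by side: ~5 cm wide, held ~60 cm from the eye.
print(visual_angle_deg(0.05, 0.60))  # ~4.8 degrees, close to the 5 degrees quoted above
```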

So why are colours in the periphery and colours in the centre of the visual field perceived so similarly? The short answer is “because it’s useful”. If the things you saw kept changing colour as they moved across your visual field, object identification would become very difficult indeed. But this doesn’t tell us how the visual system “reconstructs” colour. Is the colour “spread” through the visual field by taking the statistics of the centre and applying them to the surround? Or is there a “top-down” effect, such that knowledge of what is in the visual field tells us how it should look?

Balas and Sinha devised a neat experiment to discriminate between these hypotheses. They decided to address the following questions:

  1. “Do observers complete the colour content of natural scenes when larger regions of the image have had colour artificially removed?”
  2. “If colour completion occurs, does it do so more readily from the centre of an image outwards as opposed from the periphery inwards?”
  3. “If colour completion occurs, does it depend on natural scene statistics?”

To answer the first and second questions they created “colour chimeras” – images that were desaturated (“greyed out”) either in the centre or at the edges. Volunteers were presented with images that were entirely grey, entirely coloured (“pan-field” coloured), colour-centre chimeras or grey-centre chimeras. The researchers found that subjects were much more likely to mistake chimeric images for pan-field colour images than to mistake them for grey images. Importantly, it didn’t matter whether the chimera was greyed out at the centre or at the edge: the volunteers still saw a significant proportion of the images as pan-field coloured.
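To make the stimuli concrete, here is a minimal sketch of how one might build such a chimera with NumPy and Pillow: a circular mask selects either the centre or the periphery and replaces it with its own luminance. The mask radius, the filenames and the plain channel average are my illustrative choices, not details from the paper:

```python
import numpy as np
from PIL import Image

def make_chimera(path: str, grey_centre: bool = True, radius_frac: float = 0.35) -> Image.Image:
    """Desaturate either the centre or the periphery of an image with a circular mask."""
    img = np.array(Image.open(path).convert("RGB"), dtype=float)
    grey = img.mean(axis=2, keepdims=True).repeat(3, axis=2)  # plain channel average
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    centre = np.hypot(yy - h / 2, xx - w / 2) < radius_frac * min(h, w)
    mask = centre if grey_centre else ~centre  # choose which region to grey out
    img[mask] = grey[mask]
    return Image.fromarray(img.astype(np.uint8))

# Grey periphery, coloured centre: a "colour centre" chimera.
make_chimera("scene.jpg", grey_centre=False).save("colour_centre_chimera.jpg")
```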

To answer the third question the researchers altered the textural and the colour information in the images. In the first experiment the chimeric and non-chimeric images were all natural scenes (beaches, trees, etc.). In the second experiment, some volunteers were presented with natural scenes. Some were presented with scenes in which the colour had been altered to a single hue, so that the colours were not natural but the shapes within the image were still recognisable. Others were presented with scenes in which the textures were changed, so that the structure of objects was no longer recognisable but the distribution of colours was the same as in the original image. Still others were presented with images that had both colour and texture manipulations, in which neither the original objects nor the colours could be recognised. How would this affect the volunteers’ ability to spot chimeras? (A toy sketch of the single-hue manipulation follows the results below.)

Volunteers were less likely to “fill in” colour when they were presented with the manipulated chimeras instead of the natural scenes. Textural changes reduced the ability to “spread” colour to the rest of the scene, and colour manipulations reduced this ability even more. However, when images were manipulated both for colour and texture, subjects were very good at spotting chimeras (or very bad at filling in colour).
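For concreteness, here is a toy version of the single-hue manipulation mentioned above, again using Pillow: collapsing the hue channel preserves a scene’s structure while destroying its natural colour statistics. The particular hue value and the filenames are arbitrary choices of mine, not the authors’:

```python
import numpy as np
from PIL import Image

def single_hue(path: str, hue: int = 128) -> Image.Image:
    """Force every pixel to a single hue, keeping saturation and lightness."""
    hsv = np.array(Image.open(path).convert("HSV"))
    hsv[..., 0] = hue  # Pillow's HSV hue channel runs 0-255
    return Image.fromarray(hsv, mode="HSV").convert("RGB")

single_hue("scene.jpg").save("single_hue_scene.jpg")
```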

The authors conclude that colour spreading is a common perceptual phenomenon (much more common than “grey spreading” – the misidentification of chimeras as grey images). Furthermore, they conclude that scene statistics provide important perceptual cues that support this colour spreading. So the next time you see a bus in your peripheral field and you know that it’s red, it’s probably because you’ve seen red buses before, and not because your retina tells you so.

[1] I know it should be six before breakfast, but it’s very hot today. Vaughan at Mind Hacks is obviously made of sterner stuff.

Balas, B., & Sinha, P. (2007). “Filling-in” colour in natural scenes. Visual Cognition, 15(7), 765-778. DOI: 10.1080/13506280701295453

Are we nearly there yet? Does anticipated effort change perception of distance? April 8, 2008

Posted by Emma Byrne in Uncategorized.

This Sunday, I’ll be taking part in a little stroll around London.

I’ve been training for this for a few months now, and I’ve noticed something on my longer runs: I become very bad at estimating how long it will take to pass a given point. Like many long-distance runners, I tend to resort to distraction techniques as the fatigue kicks in. I pick a paving slab or a lamppost and “bet” myself how long it will take to pass that point. I find that I consistently overestimate how long it will take – a lamppost that I think is 10 seconds away tends to be reached in seven – and the effect gets more pronounced the longer I run. I’ve thought of several reasons why this may be:

  1. I consistently round up times to the nearest block of five seconds.
  2. I count seconds too slowly.
  3. I’m bad at estimating the distance to a landmark, especially when tired.

Hypothesis three seemed plausible, especially when I remembered reading a nice study by Proffitt et al. (2003) that demonstrated (inter alia) a significant increase in perceived distance when participants were laden with a backpack compared with when they were not. Participants (n=24) were asked to give a visual reckoning of the distance from themselves to a cone in a flat, grassy field. Half of the participants were unladen for the duration of the task; the other half wore a backpack throughout. Proffitt et al. found that all participants underestimated distance, but the laden participants gave significantly higher estimates of distance than the unladen ones.

Perception of distance is usually thought of as arising from a combination of visual cues, such as occlusion, perspective, relative size and height, parallax, texture and brightness. However, there is a model that suggests the perception of egocentric distance is also functional. That is, what we really estimate is some function of the effort it will take to travel from “here” to “there”. Proffitt et al. summarise this nicely:

“Berkeley concluded that perception of distance must be augmented by sensations that arise from eye convergence and from touch. For egocentric distances, tangible information arises from the effort required to walk a distance, and thus, effort becomes associated through experience with visual distance cues…”

They contrast the “functional” model of distance perception with the “geometric” model of visual perception:

“[In] complex, natural environments viewed with both eyes by moving observers, there is sufficient information… to specify egocentric distance. Thus, a role for effort in perceiving distance seems unnecessary if the goal of perception is to achieve a geometrically accurate representation”.

Which, of course, it isn’t. The “goal” of perception (or at least, the reward associated with perception) is the ability to interact effectively with the environment. The backpack result presented by Proffitt et al supports the argument for a functional model of distance perception: what is estimated is not distance directly, but the effort required to cover that distance.
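To make the contrast between the two models explicit, here is a cartoon of the functional story: suppose reported distance is the true distance scaled by anticipated effort, and anticipated effort grows with the carried load. The functional form and every number below are my toy assumptions, not anything from Proffitt et al.; a purely geometric model would simply return the true distance regardless of load:

```python
def perceived_distance(true_distance_m: float, load_kg: float = 0.0,
                       body_mass_kg: float = 70.0, k: float = 0.5) -> float:
    """Toy functional model: distance estimates scale with anticipated effort.

    Effort is approximated as 1 plus the extra load as a fraction of body
    mass, times a free gain k. None of these numbers come from the paper.
    """
    effort = 1.0 + k * (load_kg / body_mass_kg)
    return true_distance_m * effort

print(perceived_distance(20.0))                # unladen: 20.0 m
print(perceived_distance(20.0, load_kg=10.0))  # laden: ~21.4 m, a higher estimate
```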

So far, so plausible. But results from Hutchison and Loomis (2006a) and Woods and Philbeck (2007) appear to cast doubt on the reproducibility of the backpack effect.

Hutchison and Loomis made two departures from the original method. The first was the use of a within-participants design (each participant made estimates in both the laden and unladen conditions). The second was that participants were told that they would have to estimate the size of the target, rather than walk to it (Proffitt et al., 2006). Hutchison and Loomis (2006b) state that they failed to find any effect in the between-participant comparisons, and that the original Proffitt et al. study did not prompt the participants to anticipate walking to the target.

Woods and Philbeck (2007) also failed to find an effect. Their method also differed from the original, in that they asked some participants to rate their (anticipated) effort on the Borg CR10 scale. However, levels of anticipated effort were not significantly different between the laden and unladen conditions, which, to me, raises the question of whether or not the backpacks were sufficiently heavy. There certainly doesn’t seem to be anything in that result that invalidates the functional model of distance perception: no difference in effort = no difference in perceived distance.

What can be made of these differences? There are several other studies, many by Proffitt and Bhalla, that support the functional model of distance (and slope) estimation. A functional model of distance perception is prima facie plausible – experiential learning is more likely to build a model of distance based on exertion (which is directly accessible to the organism) than on a geometric measure of distance (which is not, despite the invention of the ruler). However, the backpack results at least have not gone unquestioned.

So where does this leave me on Sunday? I haven’t had time to carry out a complete literature review of the effects of glucose depletion, aerobic exercise, fatigue or endorphin release on perception – but if someone wants to sponsor me to run next year’s marathon, I’ll happily participate in psychophysics experiments en route. And to be on the safe side, I’ll be running this year’s marathon carrying as little as possible.

Hutchison, J. J., & Loomis, J. M. (2006a). Does energy expenditure affect the perception of egocentric distance? A failure to replicate Experiment 1 of Proffitt, Stefanucci, and Epstein (2003) [Abstract]. Journal of Vision, 6(6):859a. DOI: 10.1167/6.6.859

Hutchison, J. J., & Loomis, J. M. (2006b). Reply to Proffitt, Stefanucci, Banton, and Epstein. The Spanish Journal of Psychology, 9(2), 343-345. [PDF]

Proffitt, D. R., Stefanucci, J., Banton, T., & Epstein, W. (2003). The role of effort in perceiving distance. Psychological Science, 14(2), 106-112. DOI: 10.1111/1467-9280.t01-1-01427

Proffitt, D. R., Stefanucci, J., Banton, T., & Epstein, W. (2006). A final reply to Hutchison and Loomis. The Spanish Journal of Psychology, 9(2), 346-348. [PDF]

Woods, A. J., & Philbeck, J. (2007). Does perceived effort influence verbal reports of distance? [Abstract]. Journal of Vision, 7(9):421a. DOI: 10.1167/7.9.421