
Monday Roundup – 27th October 2008 October 27, 2008

Posted by Emma Byrne in Uncategorized.

Choosing where to look next

Researchers in Switzerland have identified some of the features of a scene that might shift our attention, according to a paper in October’s issue of the journal Cortex [1].

The researchers studied healthy volunteers as well as a number of patients who had suffered damage to the right hemisphere of the brain. Around half of these patients had spatial neglect, a condition that leaves a person unaware of objects on one side of the visual field.

The researchers found that the involuntary eye movements (saccades) of healthy volunteers, and of brain-damaged volunteers without neglect, tended to land on areas of an image that had high contrast and many edges.

The neglect patients, on the other hand, were much less likely to fixate high-contrast, edge-rich areas on the side of the visual field that they neglected. However, on their ‘good’ side, these patients tended to fixate areas of the image that were rich in edges.

According to the authors of the study, these results suggest that attention is paid to the whole scene in order to determine where the eyes will fixate next.

Surprise moves eyes

A paper in an upcoming edition of Vision Research [2] shows that a measure based on the statistical properties of an image can predict where people will direct their attention in a moving image.

Laurent Itti and Pierre Baldi showed short video clips to eight volunteers and measured 10,192 eye movements. They also analysed the videos in order to extract low-level features such as contrast, colour and movement in each area of the image.

The researchers found that, using Bayesian statistics, they could identify areas of some frames in which these low-level features changed in a way that was unexpected given what had come before. These areas were the ones most likely to be fixated by the volunteers when viewing the videos.

The authors said: “We find that surprise explains best where humans look. [Surprise] represents an easily computable shortcut towards events which deserve attention.”
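
How does one compute “surprise”? Itti and Baldi define it as the Kullback-Leibler (KL) divergence between an observer’s beliefs before and after each new observation. The Python sketch below is a minimal, single-feature illustration of that idea, assuming Gaussian beliefs and fixed observation noise; the paper’s full model runs this kind of update across many features and image locations in parallel.

```python
import numpy as np

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL divergence KL( N(mu_p, var_p) || N(mu_q, var_q) )."""
    return (0.5 * np.log(var_q / var_p)
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q)
            - 0.5)

def surprise_stream(xs, var_obs=1.0, mu0=0.0, var0=10.0, drift=0.05):
    """Running Bayesian surprise for one feature: update a Gaussian belief
    after each observation and record how far the update moved it."""
    mu, var = mu0, var0
    out = []
    for x in xs:
        var_post = 1.0 / (1.0 / var + 1.0 / var_obs)      # conjugate update
        mu_post = var_post * (mu / var + x / var_obs)
        out.append(kl_gauss(mu_post, var_post, mu, var))  # surprise = KL(posterior || prior)
        mu, var = mu_post, var_post + drift               # drift keeps the model adaptable
    return np.array(out)

# A feature that is stable for 50 frames, then jumps: surprise spikes at the jump.
rng = np.random.default_rng(0)
xs = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(5.0, 1.0, 10)])
print("most surprising frame:", int(surprise_stream(xs).argmax()))  # around frame 50
```

Note the small drift term: without some forgetting, the belief hardens over time and nothing stays surprising for long.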

Rivalry rivalry reconciled

Vision researchers often use ‘rivalry stimuli’ in order to study people’s consciousness of a visual stimulus. These stimuli have two (or more) interpretations, and viewers’ perceptions will ‘flip’ between one and the other. There are two types of rivalry stimuli, binocular and perceptual, and until recently it was thought that these acted on conscious perception in different ways.

In the 1960s, Willem Levelt laid down a set of propositions that determine the strength and rate of reversal in binocular rivalry – where each eye receives a different image. Levelt described the ways in which binocular rivalry stimuli could be manipulated in order to change the viewer’s conscious impressions of them.

The Necker cube: the upper right and lower left squares can both be seen as the ‘front’ face.

In perceptual rivalry, the same image is presented to both eyes, but the viewer is still conscious of two or more different interpretations because the image is ambiguous. The most famous example of perceptual rivalry is the Necker cube.

Now, in a paper in PLoS ONE [3], Christiaan Klink and colleagues at the Helmholtz Institute in Utrecht have shown that the same rules hold for perceptual rivalry as for binocular rivalry. The authors explain that, although the two types of rivalry stem from very different types of input, “the computational principles just prior to the production of visual awareness appear to be common to both types of rivalry.”
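
What might such a common computational mechanism look like? One standard family of rivalry models couples two percept-selective neural populations through mutual inhibition, with a slow adaptation process that gradually wears the dominant percept down until the suppressed one escapes. The toy simulation below illustrates that idea only; the parameters are invented for the demonstration, and it is not the model fitted in the paper.

```python
import numpy as np

def simulate_rivalry(strength=1.0, steps=30000, dt=1.0, tau=10.0,
                     tau_a=200.0, beta=2.5, g=2.0, noise=0.02, seed=0):
    """Two percept populations with mutual inhibition and slow adaptation.
    Returns which percept dominates at each time step."""
    rng = np.random.default_rng(seed)
    r = np.array([0.5, 0.0])   # firing rates of the two percepts
    a = np.zeros(2)            # slow adaptation of each percept
    dominant = np.empty(steps, dtype=int)
    winner = 0
    for i in range(steps):
        drive = strength - beta * r[::-1] - g * a + noise * rng.standard_normal(2)
        r += dt / tau * (-r + np.maximum(drive, 0.0))   # fast rate dynamics
        a += dt / tau_a * (-a + r)                      # slow fatigue of the winner
        if abs(r[0] - r[1]) > 0.05:   # hysteresis: ignore flicker at crossovers
            winner = int(r.argmax())
        dominant[i] = winner
    return dominant

d = simulate_rivalry()
switches = int(np.count_nonzero(np.diff(d)))
print(f"{switches} switches; mean dominance ~ {len(d) / max(switches, 1):.0f} steps")
```

Nothing in this loop cares whether the two competing signals come from two eyes or from two interpretations of a single ambiguous image, which is one intuition for why the same propositions could govern both kinds of rivalry.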

[1] Ptak, R., Golay, L., Müri, R., & Schnider, A. (2008). Looking left with left neglect: The role of spatial attention when active vision selects local image features for fixation. Cortex. DOI: 10.1016/j.cortex.2008.10.001

[2] Itti, L., & Baldi, P. (2008). Bayesian surprise attracts human attention. Vision Research. DOI: 10.1016/j.visres.2008.09.007

[3] Klink, P. C., van Ee, R., van Wezel, R. J. A., & Burr, D. C. (2008). General validity of Levelt’s propositions reveals common computational mechanisms for visual rivalry. PLoS ONE, 3(10). DOI: 10.1371/journal.pone.0003473

Monday Roundup: 20th October 2008 October 20, 2008

Posted by David Corney in Uncategorized.

Seeing what isn’t there

When we see something, our visual system rapidly recognises its shape, colour, location and so on. These features are then “bound” to the object, so we usually see a “red ball” rather than separate “redness” and “ballness”.

Work by Otto, Ögmen, and Herzog [1] shows that this is not always the case. They use short films of moving lines to show that we can recognise features of one object, but then bind them to another.

The feature studied here is the “slope” of a line: leaning left, leaning right, or vertical. They start by reproducing a previously known perceptual effect in which a vertical line is rapidly replaced by two flanking lines. When this happens, the original vertical line is rendered invisible, but they go on to show that certain features of the invisible line may linger and be “incorrectly” bound to other lines. Essentially, a vertical line appears to be sloping if it is grouped with a previous line that was sloping.
What’s more, this happens even when the original sloping line is itself invisible.

The authors say: “how features are attributed to objects is one of the most puzzling issues in the neurosciences”. This paper doesn’t exactly solve the puzzle, but perhaps tells us which way up some of the pieces go…

Depth perception with binocular disparity

There are many depth cues that our visual system uses, such as object size and near objects obscuring more distant ones. Binocular disparity refers to the slightly different positions at which a single point in space projects onto our two retinas.

If we keep our eyes fixed on one point, the other points in the scene project onto each retina at positions that differ between the two eyes, and the size of that difference depends on each point’s distance from us.

This depth cue, stereopsis, has usually been thought to apply only to very nearby objects, within a few metres, along with cues like convergence (the eyes turn inwards slightly when looking at very close objects) and accommodation (the lens changes shape, again for very close focusing).

A new paper by Liu, Bovik and Cormack [2] re-examines this. They followed volunteers walking through a forest, stopping them at random moments and asking what they were looking at. They then used a laser scanning system to measure the distance to that point and to other points nearby. Knowing where someone was fixating, and the distances to every point they could see, the researchers could then calculate the disparities across the scene. The results show that stereopsis should work even at quite large distances, because the disparities produced in natural scenes are typically larger than the minimal perceptual threshold.
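
The geometry behind this is simple enough for a back-of-the-envelope check: for small angles, the disparity between a fixated point at distance d_fix and a second point at distance d_obj is roughly I(1/d_obj - 1/d_fix), where I is the separation between the eyes. The sketch below assumes a typical 6.5 cm interocular distance; it illustrates the geometry only and is not the authors’ laser-scanning analysis.

```python
import numpy as np

IOD = 0.065                    # interocular distance in metres (typical adult, assumed)
ARCSEC = np.pi / (180 * 3600)  # one arcsecond in radians

def disparity_arcsec(d_fix, d_obj, iod=IOD):
    """Small-angle binocular disparity between a fixated point at d_fix and
    a second point at d_obj (both in metres), returned in arcseconds."""
    return iod * (1.0 / d_obj - 1.0 / d_fix) / ARCSEC

# Disparity carried by a point 10% nearer than the fixated point:
for d in (2, 10, 50, 200):
    print(f"fixation at {d:>3} m: {disparity_arcsec(d, 0.9 * d):7.1f} arcsec")
```

With stereoacuity thresholds commonly reported in the tens of arcseconds, a 10% depth difference stays above threshold out to several tens of metres, consistent with the paper’s conclusion.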

They also compared these findings with known neuroscience data, and found a match with the MT area in macaques, suggesting that our brains are tuned to the image statistics of disparity. If we are so good at stereopsis, and the visual data is abundant, presumably we use it even for more distant scenes.

In a further study, they showed that the disparity statistics for indoor scenes are similar to those for the forest scenes. But of course, we didn’t evolve indoors, which makes me wonder: do we tend to prefer interiors that somehow match the image statistics we evolved with? Is there a neuroscience of feng shui?!

[1] Otto, T., Ögmen, H., & Herzog, M. H. (2006). The flight path of the phoenix: The visible trace of invisible elements in human vision. Journal of Vision, 6(10), 1079-1086. DOI: 10.1167/6.10.7

[2] Liu, Y., Bovik, A. C., & Cormack, L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8(11), 1-14. DOI: 10.1167/8.11.19

Monday Roundup: 13th October 2008 October 13, 2008

Posted by Emma Byrne in Uncategorized.

Colour-blind monkeys are fine at foraging

Fruit foraging may not be the “killer app” for colour vision, according to research published in this month’s issue of PLoS ONE [1].

Researchers from Japan, Canada, New Zealand and the UK studied wild black-handed spider monkeys in Santa Rosa National Park, Costa Rica. They used genetic screening to identify nine individuals with full colour vision, and 12 that were red-green colour blind.

Spider monkeys spend 80-90% of their foraging time feeding on fruit. The researchers found that the red-green colour blind monkeys were just as efficient at finding edible fruit as their full-colour vision counterparts.

The researchers measured the colour profile of the fruits and the background foliage and found that the difference in luminance between the two was large enough to show when ripe fruit was present. “The advantage of red-green color vision in primates may not be as salient as previously thought and needs to be evaluated in further field observations,” said the authors.
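
As a rough illustration of the kind of comparison involved, the snippet below computes a Michelson luminance contrast between a fruit-coloured patch and a foliage-coloured patch. The RGB values and the Rec. 709 luminance weights here are stand-ins of mine; the study itself measured reflectance spectra in the field and modelled the monkeys’ own photoreceptors.

```python
def luminance(rgb):
    """Approximate relative luminance of a linear RGB triple (values in 0-1)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights

def michelson_contrast(l1, l2):
    """Michelson contrast: 0 for identical luminances, approaching 1 at maximum."""
    return abs(l1 - l2) / (l1 + l2)

ripe_fruit = (0.80, 0.55, 0.20)   # hypothetical yellowish ripe fruit
foliage = (0.15, 0.35, 0.10)      # hypothetical dark green leaves

c = michelson_contrast(luminance(ripe_fruit), luminance(foliage))
print(f"luminance contrast: {c:.2f}")   # detectable without red-green colour vision
```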

Technicolour dreams

In the early 20th century, results from surveys of dream imagery showed that very few people reported dreaming in colour. However, results that date from before the 20th century, and from the 1960s onwards, show the opposite: that colour dreaming is common [2].

In a landmark study in 1942 by Warren Middleton, 71% of college sophomores said that they rarely or never dreamt in colour. When Eric Schwitzgebel replicated the study in 2003 he found that only 17% of students claimed to rarely or never dream in colour.

Eva Murzyn from the University of Dundee, Scotland, collected dream data from volunteers who either grew up with predominantly black and white media or did not. Participants were asked to keep a diary of their dreams, or to fill in a questionnaire at a later date.

The results, to be published in the journal Consciousness and Cognition, showed that people over 55 reported a higher proportion of dreams in black and white than those under 55.

The study also showed that, whilst all participants could remember coloured imagery better than black and white imagery, the older participants were better able to recall the details of dreams in black and white than the younger volunteers.

Murzyn said: “This result can be interpreted in two ways: either people who claim to have greyscale dreams but have not had experience with such media are simply mislabelling poorly recalled colour dreams or people with early black and white media access misremember the presence of colour in their dreams more easily than people without such experience. This second option could be linked to different expectations and beliefs about dreaming.”

Older brains work differently

Learning to solve visual reasoning tasks like Raven’s progressive matrices is not just a matter of honing existing activity. The parts of the brain used to solve these problems change as we get older [3].

An example problem similar to Raven’s Progressive Matrices, from Wikimedia Commons.

In research to be published in an upcoming issue of Brain and Cognition, scientists studied the brain activity of 8-19 year-olds solving these types of puzzles. Although children of all ages did equally well on the tests, fMRI scans showed that younger children tended to have higher activity in areas of the brain associated with slow, effortful thought. Older participants showed activation consistent with faster, more efficient reasoning.

The study suggests that faster visuospatial reasoning is not the result of brain areas learning to operate more quickly and efficiently. Rather, different brain areas take over as children get better at a task.

[1] Hiramatsu, C., Melin, A. D., Aureli, F., Schaffner, C. M., Vorobyev, M., Matsumoto, Y., Kawamura, S., & Rands, S. (2008). Importance of achromatic contrast in short-range fruit foraging of primates. PLoS ONE, 3(10). DOI: 10.1371/journal.pone.0003356

[2] Murzyn, E. (2008). Do we only dream in colour? A comparison of reported dream colour in younger and older adults with different experiences of black and white media. Consciousness and Cognition. DOI: 10.1016/j.concog.2008.09.002

[3] Eslinger, P., Blair, C., Wang, J., Lipovsky, B., Realmuto, J., Baker, D., Thorne, S., Gamson, D., Zimmerman, E., & Rohrer, L. (2008). Developmental shifts in fMRI activations during visuospatial relational reasoning. Brain and Cognition. DOI: 10.1016/j.bandc.2008.04.010

Monday Roundup – 6th October 2008 October 6, 2008

Posted by Emma Byrne in Uncategorized.

Does your seven-month-old enjoy snooker?
Infants see causal relationships between events, according to research published in the journal Cognitive Psychology [1].

The authors explain: “When we see a collision between two billiard balls, for example, we do not simply see the cessation of one ball’s motion followed by the onset of motion in the other: instead, we see one ball cause the other’s motion”.

It is not known whether the perception of causal movement is learnt during a person’s lifetime, or whether it has been shaped by evolution. To test whether the effect is present in very young children, George E. Newman and colleagues at Yale University studied 7-month-old babies and their perception of causal movement.

The researchers showed the babies short videos of disks that behaved causally, like billiard ball collisions, until they became habituated to this type of movement and lost interest. The babies were then shown more videos of “collisions”, along with videos in which the causal relationship was “broken” by moving one of the disks early or late.

The babies could tell the difference between the two types of events, showing significantly more interest in the “non-causal” videos.

Emotional link to visual awareness

Researchers at the University of Houston, Texas, have used binocular rivalry to study the effect of emotion on vision [2].

Twelve subjects were shown pairs of images – one to each eye. The participants saw one or the other of the images at any one time, and their perception switched between the two.

The researchers used images from the International Affective Picture System, a library of pictures that have been rated by valence (from “pleasant” to “unpleasant”) and by arousal, the strength of the effect on the viewer’s emotions. A bunch of flowers is a high-valence, low-arousal picture, whilst a picture of a badly injured person would have low valence and high arousal.

The study, published in volume 48 of Vision Research, found that the volunteers saw the “nice” pictures earlier and for longer than the “nasty” pictures, as long as the emotional intensity was low. Where the pictures had a stronger emotional effect, the participants saw the unpleasant picture earlier and for longer.

“Our study teases apart the relative effects of the arousal versus valence levels of a stimulus on one’s awareness of it,” said the authors. “The consequences of not processing a noxious stimulus are direr than the consequences of not processing a pleasing one… On the other hand, if the stimuli are not arousing and therefore, not critical to one’s fitness, the more pleasant stimulus is obviously more pleasurable.”

[1] Newman, G., Choi, H., Wynn, K., & Scholl, B. (2008). The origins of causal perception: Evidence from postdictive processing in infancy. Cognitive Psychology, 57(3), 262-291. DOI: 10.1016/j.cogpsych.2008.02.003

[2] Sheth, B., & Pham, T. (2008). How emotional arousal and valence influence access to awareness. Vision Research, 48(23-24), 2415-2424. DOI: 10.1016/j.visres.2008.07.013

Monday Round-Up September 29, 2008

Posted by Emma Byrne in Uncategorized.

Shape fast – colour slow: spotting changes in rival images

A study published in October’s Vision Research gives more detail about the way colour and orientation information are handled in the visual system [1].

Volunteers were presented with two different images – one to each eye. This results in binocular rivalry: only one of the images is seen at a time and perception flips between the two.

Sandra Veser from Leipzig University, together with colleagues from New Zealand and Cuba, looked at the time it took 17 volunteers to spot a change in one of the images. When a change was made to the image that was currently being perceived, the difference was noticed straight away. Otherwise the change was not seen.

The research team measured the changes in the volunteers’ brain activity (specifically, the event-related potentials) when either the orientation or the colour of the bars changed. The response to a change in orientation happened about twice as fast (around 0.1 s) as the response to a change in colour (around 0.2 s).
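
For readers unfamiliar with the technique: an event-related potential is obtained by averaging many stretches of EEG that are time-locked to the same event, so that brain activity unrelated to the event averages away. The sketch below makes the point with synthetic data; real pipelines add filtering, artefact rejection and baseline correction, and nothing here reproduces Veser and colleagues’ actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                                # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1.0 / fs)      # 100 ms before to 400 ms after the change

# A 5-microvolt deflection peaking ~100 ms after the change (cf. the
# orientation response above), buried in much larger background EEG.
true_response = 5e-6 * np.exp(-((t - 0.1) / 0.03) ** 2)
trials = true_response + 20e-6 * rng.standard_normal((200, t.size))

erp = trials.mean(axis=0)               # averaging reveals the time-locked response
peak_ms = 1000 * t[np.abs(erp).argmax()]
print(f"ERP peak at roughly {peak_ms:.0f} ms after the change")
```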

Gene therapy to reduce retinal cell death

Gene therapy may one day help save the sight of patients with detached retinas, according to a study by Mong-Ping Shyong and colleagues in Taiwan [2].

Patients with retinal detachment may still lose their sight, even if the retina is reattached surgically. Previous research suggested that programmed cell death (apoptosis) may be responsible for this loss of vision.

The researchers tested a virus that was modified to express the enzyme HO-1, which is known to reduce the rate of apoptosis. They injected the genetically modified virus beneath the retinas of rats with experimentally detached retinas. Compared to rats that had another virus injected into the retina, or that received no treatment at all, the rats treated with the HO-1-producing virus had more photoreceptors and a thicker outer retinal layer 28 days after treatment.

The researchers suggest that gene therapy may one day lead to better recovery for patients after surgical reattachment of the retina.

Sound strengthens seeing

In a paper in October’s Acta Psychologica, Aleksander Väljamäe and Salvador Soto-Faraco report an experiment that shows that sound strengthens the visual perception of movement [3].

That sound and vision act together to give clues about motion has been known for a long time. The authors give the example of the sliding doors in the film The Empire Strikes Back, which were created using two stills (one of the door open, one of it closed) and a sound effect.

Väljamäe and Soto-Faraco used the motion after effect (similar to the waterfall illusion, where watching water cascade downwards for some time makes the rocks appear to move uphill) to study whether sound reinforces the perception of movement.

Volunteers watched short videos of several flashes in succession. Some of these flashes were made progressively bigger, some were made smaller, so that it looked like the light was approaching or receding. The researchers then measured the motion after effect to see how “strong” the perception of motion had been.

Some flashes were so far apart that they did not give the impression of movement by themselves. But when they were accompanied by sounds that also seemed to be approaching or receding, the participants experienced the motion after effect.

The researchers suggest that fewer frames per second might be needed in videos, as long as the sound effects are closely matched to the images. This could lead to much higher rates of data compression.

[1] Veser, S., O’Shea, R., Schröger, E., Trujillo-Barreto, N., & Roeber, U. (2008). Early correlates of visual awareness following orientation and colour rivalry. Vision Research, 48(22), 2359-2369. DOI: 10.1016/j.visres.2008.07.024

[2] Shyong, M., Lee, F., Hen, W., Kuo, P., Wu, A., Cheng, H., Chen, S., Tung, T., & Tsao, Y. (2008). Viral delivery of heme oxygenase-1 attenuates photoreceptor apoptosis in an experimental model of retinal detachment. Vision Research, 48(22), 2394-2402. DOI: 10.1016/j.visres.2008.07.017

[3] Väljamäe, A., & Soto-Faraco, S. (2008). Filling-in visual motion with sounds. Acta Psychologica, 129(2), 249-254. DOI: 10.1016/j.actpsy.2008.08.004