
Fast at the peripheries: deaf people have equally fast responses throughout the visual field. December 13, 2011

Posted by Emma Byrne in Uncategorized.
Tags:
2 comments

ResearchBlogging.org

File under: “it turns out it’s more complicated than that.” Many deaf people have better-than-average vision, and many studies have shown that parts of the brain that would otherwise be used for hearing help out with vision instead. New work from a joint team from universities in Italy and France demonstrates that things may not be as straightforward as we might think: deaf people may have much better peripheral vision than their hearing counterparts.

In an elegantly simple experiment, the team took ten people who had been profoundly deaf since infancy and ten matched hearing controls, and asked them to carry out a simple target-spotting task (see Figure 1). The participants pressed a key when they saw a target appear in one of eight locations: four close to the centre of the visual field and four further out in the periphery.

Figure 1: Experimental protocol and behavioural results from Bottari et al (2011). The dotted circles in part a show the eight target locations; the “C” and “reverse-C” shapes are the targets that the participants had to identify. Not only did the participants who had been deaf since childhood respond faster, they were also less affected by the position of the stimulus than the participants with unimpaired hearing.

The researchers discovered that the participants who had been deaf since childhood spotted and responded to the targets much faster than the hearing participants. However, the most interesting behavioural result was that there is something very different about the visual fields of the deaf and hearing participants. The hearing participants’ reaction times slowed down when the targets were on the edges of the visual field, but the deaf participants’ reactions stayed equally fast. This suggests that deaf people are able to process information from all across the visual field equally effectively, and aren’t as hampered at the periphery as their hearing counterparts.

Bottari, D., Caclin, A., Giard, M., & Pavani, F. (2011). Changes in Early Cortical Visual Processing Predict Enhanced Reactivity in Deaf Individuals. PLoS ONE, 6(9). DOI: 10.1371/journal.pone.0025607

Monday Roundup – 27th October 2008 October 27, 2008

Posted by Emma Byrne in Uncategorized.
Tags:
add a comment

ResearchBlogging.org

Choosing where to look next

Researchers in Switzerland have identified some of the features of a scene that might shift our attention, according to a paper in October’s issue of the journal Cortex [1].

The researchers studied healthy volunteers as well as a number of patients who had suffered damage to the right hemisphere of the brain. Of those with brain damage, around half suffered from spatial neglect, a condition that leaves a person unaware of objects in one side of the visual field.

The researchers found that the involuntary eye movements (saccades) of healthy volunteers, and of brain-damaged volunteers without neglect, tended to land on areas of an image that had high contrast and many edges.

The neglect patients, on the other hand, were much less likely to land on areas with high contrast and many edges in the side of the visual field that they neglected. However, on their ‘good’ side, these patients tended to fixate on areas of the image that were rich in edges.

According to the authors of the study, these results suggest that attention is paid to the whole scene, in order to determine where the eyes will fixate next.

Surprise moves eyes

A paper in an upcoming edition of Vision Research [2] shows that a measure based on the statistical properties of an image can predict where people will direct their attention in a moving image.

Laurent Itti and Pierre Baldi showed short video clips to eight volunteers and measured 10,192 eye movements. They also analysed the videos in order to extract low-level features such as contrast, colour and movement in each area of the image.

The researchers found that, using Bayesian statistics, they could identify areas of some frames in which these low-level features changed in a way that was unexpected given what came before. These areas were the most likely to be fixated by the volunteers when viewing the videos.

The authors said: “We find that surprise explains best where humans look. [Surprise] represents an easily computable shortcut towards events which deserve attention.”
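Itti and Baldi formalise “surprise” as the Kullback–Leibler divergence between an observer’s prior beliefs and the posterior after seeing new data. Here is a minimal sketch of that idea (my own illustration, not the authors’ code), assuming Gaussian beliefs about a single low-level feature:

```python
import numpy as np

def gaussian_surprise(prior_mu, prior_var, obs, obs_var):
    """Bayesian surprise for a Gaussian prior updated by one noisy observation.

    Surprise is KL(posterior || prior); dividing by ln(2) gives units of bits.
    """
    # Conjugate Gaussian update: precision-weighted combination.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)

    # Closed-form KL divergence between two 1-D Gaussians.
    kl_nats = 0.5 * (post_var / prior_var
                     + (post_mu - prior_mu) ** 2 / prior_var
                     - 1.0
                     + np.log(prior_var / post_var))
    return kl_nats / np.log(2)

# An observation close to what was expected barely moves beliefs...
print(gaussian_surprise(prior_mu=0.5, prior_var=0.01, obs=0.52, obs_var=0.01))  # ~0.15
# ...while an unexpected one shifts them a lot, i.e. is surprising.
print(gaussian_surprise(prior_mu=0.5, prior_var=0.01, obs=0.90, obs_var=0.01))  # ~3.0
```

In the full model this kind of update runs over many local feature detectors and over time, but the principle is the same: attention is drawn to wherever the data force the biggest belief revision.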

Rivalry rivalry reconciled

Vision researchers often use ‘rivalry stimuli’ in order to study people’s consciousness of a visual stimulus. These types of stimuli have two (or more) interpretations, and viewers’ perceptions will ‘flip’ between one and the other. There are two types of rivalry stimuli: binocular and perceptual and, until recently, it was thought that these acted on conscious perception in different ways.

In the 1960s, Willem Levelt laid down a set of propositions that determine the strength and rate of reversal in binocular rivalry – where each eye receives a different image. Levelt described the ways in which binocular rivalry stimuli could be manipulated in order to change the viewer’s conscious impressions of them.

The Necker cube: the upper right and lower left squares can both be seen as the “front” face.

In perceptual rivalry, the same image is presented to both eyes, but the viewer is still conscious of two or more different interpretations because the image is ambiguous. The most famous example of perceptual rivalry is the Necker cube.

Now, in a paper in PLoS ONE [3], Christiaan Klink and colleagues at the Helmholtz Institute in Utrecht have shown that the same rules hold for perceptual rivalry as for binocular rivalry. The authors explain that, although the two types of rivalry stem from very different types of input, “the computational principles just prior to the production of visual awareness appear to be common to both types of rivalry.”

[1] Ptak, R., Golay, L., Müri, R., & Schnider, A. (2008). Looking left with left neglect: the role of spatial attention when active vision selects local image features for fixation. Cortex. DOI: 10.1016/j.cortex.2008.10.001

[2] Itti, L., & Baldi, P. (2008). Bayesian surprise attracts human attention. Vision Research. DOI: 10.1016/j.visres.2008.09.007

[3] Klink, P. C., van Ee, R., van Wezel, R. J. A., & Burr, D. C. (2008). General Validity of Levelt’s Propositions Reveals Common Computational Mechanisms for Visual Rivalry. PLoS ONE, 3(10). DOI: 10.1371/journal.pone.0003473

Monday Roundup: 20th October 2008 October 20, 2008

Posted by David Corney in Uncategorized.
Tags:
add a comment

ResearchBlogging.org

Seeing what isn’t there

When we see something, our visual system rapidly recognises its shape, colour, location and so on. These features are then “bound” to the object, so we usually see a “red ball” rather than separate “redness” and “ballness”.

Work by Otto, Ögmen, and Herzog [1] shows that this is not always the case. They use short films of moving lines to show that we can recognise features of one object, but then bind them to another.

The feature studied here is the “slope” of a line: left, right or vertical. They start by reproducing a previously known perceptual effect, where a vertical line is rapidly replaced by two flanking lines. When this happens, the original vertical line is rendered invisible, but they go on to show that certain features of the invisible line may linger and be “incorrectly” bound to other lines. Essentially, a vertical line appears to be sloping if it is grouped with a previous line that was sloping. What’s more, this happens even when the original sloping line is itself invisible.

The authors say: “how features are attributed to objects is one of the most puzzling issues in the neurosciences”. This paper doesn’t exactly solve the puzzle, but perhaps tells us which way up some of the pieces go…

Depth perception with binocular disparity

There are many depth cues that our visual system uses, such as object size and near objects obscuring more distant ones. Binocular disparity refers to the fact that each point in space projects onto a slightly different position in each of our two retinas.

If we keep our eyes fixed on some point, other points in space around it project onto each retina, and objects at different distances are projected onto different parts of each retina. The difference between the two projections grows with the difference in depth.

This depth cue, stereopsis, has usually been thought to apply only to very nearby objects, within a few metres, along with cues like convergence (the eyes cross slightly when looking at very close objects) and accommodation (the lens changes shape, again for very close focusing).

A new paper by Liu, Bovik and Cormack [2] re-examines this. They followed volunteers walking through a forest, and randomly asked them to stop and say what they were looking at. They then used a laser scanning system to measure the distance to that point and to other points nearby. Knowing where someone was fixating, and the distances to every point they could see, the researchers could then calculate the visual disparities across the scene. The results show that stereopsis should work even at quite large distances, because the disparities produced in natural scenes are typically larger than the minimal perceptual threshold.
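To get a feel for the numbers: the disparity between a fixated point and another point roughly straight ahead can be approximated as η ≈ I(1/d₂ − 1/d₁), where I is the interocular distance. A quick sketch (my own arithmetic, not the authors’ code, assuming a typical 6.5 cm interocular distance):

```python
import math

def disparity_arcsec(fix_dist_m, obj_dist_m, iod_m=0.065):
    """Approximate angular disparity (in arcseconds) between a fixated point
    and a second point straight ahead, via eta ~= iod * (1/d_obj - 1/d_fix)."""
    eta_rad = iod_m * (1.0 / obj_dist_m - 1.0 / fix_dist_m)
    return abs(eta_rad) * (180.0 / math.pi) * 3600.0  # radians -> arcseconds

# Fixating a tree trunk 20 m away, with another trunk 2 m behind it:
print(disparity_arcsec(20.0, 22.0))  # ~60 arcsec, well above the few-arcsecond
                                     # stereoacuity threshold of a good observer
```

On these numbers the paper’s point is easy to see: even tens of metres away, ordinary depth separations produce disparities comfortably above threshold.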

They also compared these findings with known neuroscience data, and found a match with the MT area in macaques, suggesting that our brains are tuned to the image statistics of disparity. If we are so good at stereopsis, and the visual data is abundant, presumably we use it even for more distant scenes.

In a further study, they showed that the disparity statistics are similar for indoor scenes, as for the forest scenes. But of course, we didn’t evolve indoors, which makes me wonder if we tend to prefer interiors that somehow match the image statistics that we have evolved in response to? Is there a neuroscience of feng shui?!

[1] Otto, T., Ögmen, H., & Herzog, M. H. (2006). The flight path of the phoenix—The visible trace of invisible elements in human vision. Journal of Vision, 6(10), 1079-1086. DOI: 10.1167/6.10.7

[2] Liu, Y., Bovik, A. C., & Cormack, L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8(11), 1-14. DOI: 10.1167/8.11.19

Monday Roundup: 13th October 2008 October 13, 2008

Posted by Emma Byrne in Uncategorized.
Tags:
add a comment

ResearchBlogging.org

Colour blind monkeys are fine at foraging

Fruit foraging may not be the “killer app” for colour vision, according to research published in this month’s issue of PLoS ONE [1].

Researchers from Japan, Canada, New Zealand and the UK studied wild black-handed spider monkeys in Santa Rosa National Park, Costa Rica. They used genetic screening to identify nine individuals with full colour vision and twelve that were red-green colour blind.

Spider monkeys spend 80-90% of their foraging time feeding on fruit. The researchers found that the red-green colour blind monkeys were just as efficient at finding edible fruit as their full-colour vision counterparts.

The researchers measured the colour profile of the fruits and the background foliage and found that the difference in luminance between the two was large enough to show when ripe fruit was present. “The advantage of red-green color vision in primates may not be as salient as previously thought and needs to be evaluated in further field observations,” said the authors.
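Luminance differences of this kind are commonly quantified as Michelson contrast, |L₁ − L₂| / (L₁ + L₂). A minimal sketch with made-up luminance readings (not the paper’s data):

```python
def michelson_contrast(lum_a, lum_b):
    """Michelson contrast between two luminances: ranges from 0 (identical)
    to 1 (one of the two is completely dark)."""
    return abs(lum_a - lum_b) / (lum_a + lum_b)

# Hypothetical photometer readings, in cd/m^2, for ripe fruit and foliage:
print(michelson_contrast(48.0, 20.0))  # ~0.41, an easily detectable contrast
```

A brightness signal of this size is available to any monkey, colour blind or not, which is the authors’ point.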

Technicolour dreams

In the early 20th century, results from surveys of dream imagery showed that very few people reported dreaming in colour. However, results that date from before the 20th century, and from the 1960s onwards, show the opposite: that colour dreaming is common [2].

In a landmark study in 1942 by Warren Middleton, 71% of college sophomores said that they rarely or never dreamt in colour. When Eric Schwitzgebel replicated the study in 2003 he found that only 17% of students claimed to rarely or never dream in colour.

Eva Murzyn from the University of Dundee, Scotland, collected dream data from volunteers who either grew up with predominantly black and white media or did not. Participants were asked to keep a diary of their dreams, or to fill in a questionnaire at a later date.

The results, to be published in the journal Consciousness and Cognition, showed that people over 55 reported a higher proportion of dreams in black and white than those under 55.

The study also showed that, whilst all participants could remember coloured imagery better than black and white imagery, the older participants were better able to recall the details of dreams in black and white than the younger volunteers.

Murzin said: “This result can be interpreted in two ways: either people who claim to have greyscale dreams but have not had experience with such media are simply mislabelling poorly recalled colour dreams or people with early black and white media access misremember the presence of colour in their dreams more easily than people without such experience. This second option could be linked to different expectations and beliefs about dreaming.”

Older brains work differently

Learning to solve visual reasoning tasks like Raven’s progressive matrices is not just a matter of honing activity in the same brain areas: the parts of the brain used to solve these problems change as we get older [3].

An example problem similar to Raven’s Progressive Matrices, from Wikimedia Commons.

In research to be published in an upcoming issue of Brain and Cognition, scientists studied the brain activity of 8-19 year-olds solving these types of puzzles. Although children of all ages did equally well on the tests, fMRI scans showed that younger children tended to have higher activity in areas of the brain associated with slow, effortful thought. Older participants showed activation consistent with faster, more efficient reasoning.

The study suggests that faster visuospatial reasoning is not the result of brain areas learning to operate more quickly and efficiently. Rather, different brain areas take over as children get better at a task.

[1] Hiramatsu, C., Melin, A. D., Aureli, F., Schaffner, C. M., Vorobyev, M., Matsumoto, Y., Kawamura, S., & Rands, S. (2008). Importance of Achromatic Contrast in Short-Range Fruit Foraging of Primates. PLoS ONE, 3(10). DOI: 10.1371/journal.pone.0003356

[2] Murzyn, E. (2008). Do we only dream in colour? A comparison of reported dream colour in younger and older adults with different experiences of black and white media. Consciousness and Cognition. DOI: 10.1016/j.concog.2008.09.002

[3] Eslinger, P., Blair, C., Wang, J., Lipovsky, B., Realmuto, J., Baker, D., Thorne, S., Gamson, D., Zimmerman, E., & Rohrer, L. (2008). Developmental shifts in fMRI activations during visuospatial relational reasoning. Brain and Cognition. DOI: 10.1016/j.bandc.2008.04.010

Monday Roundup – 6th October 2008 October 6, 2008

Posted by Emma Byrne in Uncategorized.
Tags:
2 comments

ResearchBlogging.org

Does your seven-month-old enjoy snooker?

Infants see causal relationships between events, according to research published in the journal Cognitive Psychology [1].

The authors explain: “When we see a collision between two billiard balls, for example, we do not simply see the cessation of one ball’s motion followed by the onset of motion in the other: instead, we see one ball cause the other’s motion”.

It is not known whether the perception of causal movement is learnt during a person’s lifetime, or whether it has been shaped by evolution. To test whether the effect is present in young children, George E. Newman and colleagues at Yale University studied 7-month-old babies and their perception of causal movement.

The researchers first showed the babies short videos of disks that behaved causally, like billiard-ball collisions, until they became habituated to this type of movement and lost interest. The babies were then shown more videos of “collisions”, alongside videos in which the causal relationship was “broken” by moving one of the disks early or late.

The babies could tell the difference between the two types of events, showing significantly more interest in the “non-causal” videos.

Emotional link to visual awareness

Researchers at the University of Houston, Texas have used binocular rivalry to study the effect of emotion on vision [2].

Twelve subjects were shown pairs of images – one to each eye. The participants saw one or the other of the images at any one time, and their perception switched between the two.

The researchers used images from the International Affective Picture System, a library of pictures that have been rated by valence (from “pleasant” to “unpleasant”) and arousal, the strength of the effect on the viewer’s emotions. A bunch of flowers is a high-valence, low-arousal picture, whilst a picture of a badly injured person would have low valence and high arousal.

The study, published in volume 48 of Vision Research, found that the volunteers saw the “nice” pictures earlier and for longer than the “nasty” pictures, as long as the emotional intensity was low. Where the pictures had a stronger emotional effect, the participants saw the unpleasant picture earlier and for longer.

“Our study teases apart the relative effects of the arousal versus valence levels of a stimulus on one’s awareness of it,” said the authors. “The consequences of not processing a noxious stimulus are direr than the consequences of not processing a pleasing one… On the other hand, if the stimuli are not arousing and therefore, not critical to one’s fitness, the more pleasant stimulus is obviously more pleasurable.”

[1] Newman, G., Choi, H., Wynn, K., & Scholl, B. (2008). The origins of causal perception: Evidence from postdictive processing in infancy. Cognitive Psychology, 57(3), 262-291. DOI: 10.1016/j.cogpsych.2008.02.003

[2] Sheth, B., & Pham, T. (2008). How emotional arousal and valence influence access to awareness. Vision Research, 48(23-24), 2415-2424. DOI: 10.1016/j.visres.2008.07.013

Monday Round-Up September 29, 2008

Posted by Emma Byrne in Uncategorized.
Tags:
add a comment

ResearchBlogging.org

Shape fast – colour slow: spotting changes in rival images

A study published in October’s Vision Research gives more detail about the way colour and orientation information are handled in the visual system [1].

Volunteers were presented with two different images – one to each eye. This results in binocular rivalry: only one of the images is seen at a time and perception flips between the two.

Sandra Veser from Leipzig University, together with colleagues from New Zealand and Cuba, looked at the time it took 17 volunteers to spot a change in one of the images. When a change was made to the image that was currently being perceived, the difference was noticed straight away; otherwise the switch was not seen.

The research team measured the changes in the volunteers’ brain activity (specifically, the event-related potentials) when either the orientation or the colour of the bars changed. The response to a change in orientation happened about twice as fast (0.1 s) as the response to a change in colour (0.2 s).

Gene therapy to reduce retinal cell death

Gene therapy may one day help save the sight of patients with detached retinas, according to a study by Mong-Ping Shyong and colleagues in Taiwan [2].

Patients with retinal detachment may still lose their sight, even if the retina is reattached surgically. Previous research suggested that programmed cell death (apoptosis) may be responsible for this loss of vision.

The researchers tested a virus that was modified to express the enzyme HO-1 (heme oxygenase-1), which is known to reduce the rate of apoptosis. They injected the genetically modified virus beneath the retinas of rats with experimentally detached retinas. Compared to rats that had another virus injected into the retina, or that received no treatment at all, the rats treated with the HO-1-producing virus had more photoreceptors and a thicker outer retinal layer 28 days after treatment.

The researchers suggest that gene therapy may one day lead to better recovery for patients after surgical reattachment of the retina.

Sound strengthens seeing

In a paper in October’s Acta Psychologica, Aleksander Väljamäe and Salvador Soto-Faraco report an experiment that shows that sound strengthens the visual perception of movement [3].

That sound and vision act together to give clues about motion has been known for a long time. The authors give the example of the sliding doors in the film The Empire Strikes Back, which were created using two stills (one of the door open, one of it closed) and a sound effect.

Väljamäe and Soto-Faraco used the motion after effect (similar to the waterfall illusion, where watching water cascade downwards for some time makes the rocks appear to move uphill) to study whether sound reinforces the perception of movement.

Volunteers watched short videos of several flashes in succession. Some of these flashes were made progressively bigger, some were made smaller, so that it looked like the light was approaching or receding. The researchers then measured the motion after effect to see how “strong” the perception of motion had been.

Some flashes were so far apart that they did not give the impression of movement by themselves. But when they were accompanied by sounds that also seemed to be approaching or receding, the participants experienced the motion after effect.

The researchers suggest that fewer frames per second might be needed in videos, as long as the sound effects are closely matched to the images. This could lead to much higher rates of data compression.

[1] Veser, S., O’Shea, R., Schröger, E., Trujillo-Barreto, N., & Roeber, U. (2008). Early correlates of visual awareness following orientation and colour rivalry. Vision Research, 48(22), 2359-2369. DOI: 10.1016/j.visres.2008.07.024

[2] Shyong, M., Lee, F., Hen, W., Kuo, P., Wu, A., Cheng, H., Chen, S., Tung, T., & Tsao, Y. (2008). Viral delivery of heme oxygenase-1 attenuates photoreceptor apoptosis in an experimental model of retinal detachment. Vision Research, 48(22), 2394-2402. DOI: 10.1016/j.visres.2008.07.017

[3] Väljamäe, A., & Soto-Faraco, S. (2008). Filling-in visual motion with sounds. Acta Psychologica, 129(2), 249-254. DOI: 10.1016/j.actpsy.2008.08.004

Lighting in Relation to Public Health September 11, 2008

Posted by David Corney in Uncategorized.
add a comment

Browsing in the library just now, I just came across, and dipped into, a rather elderly book entitled “Lighting in Relation to Public Health” by Dr Janet Howell Clark, published in Baltimore by Williams and Wilkins Company, in 1924. The wonderful “old book” smell came at no extra cost. Something that caught my eye was from Chapter 8, Lighting in Schools. “If possible, unilateral light from the left should be used….” And for rooms that are too wide, “lighting from the left and rear being preferable to lighting from the right and left” (p.92).

Why light from the left? Again, a few pages later, there’s a section on “Lighting Legislation in schools” (p.99). Apparently, designs for new schools should be approved by the State Board of Education or some equivalent body, each state having their own legislation. Examples included, “Indiana: light from left only” and “Minnesota: Light from left, except in very large rooms. Light from east best, west next. North and South light to be avoided in regular study rooms.” Many other states are listed as requiring “light from left, or left and rear” in classrooms.

Why this idea about light coming from the left in schools? I couldn’t see any explanation given in the book, nor any mention of left-light for factories or the home. (Though it did suggest that kitchen lighting should be stronger than dining room lighting, so that crockery is “visibly” clean…) And this amongst plenty of sensible-seeming advice about natural vs artificial light, and the importance of sufficient light in the workplace and so on.

So my first thought was that the author was some crank, or, being charitable, that this was some outdated idea that seemed perfectly reasonable in the 1920s. Anyway, I did a quick google, and via “The Biographical Dictionary of Women in Science” found a little more about the author.

Dr Janet Howell Clark (1889-1969) gained a PhD in Physics from Johns Hopkins in 1913, having graduated from Bryn Mawr College, and was later made professor of biological sciences at the University of Rochester, and Dean of the Women’s College there. Some years later, this post was apparently downgraded (in 1952) from being equal to the Dean of Men to being subordinate to it, so she resigned and returned to Johns Hopkins, eventually retiring at 78.

She was an expert in the effects of radiation on eyesight and in eye diseases caused by too little or too much light (in glassblowers and iron smelters, for example). She investigated the use of ultraviolet light both as an antibiotic and to prevent rickets in children. She was also greatly concerned about women’s education in general, feeling it was a good thing, and was head of Bryn Mawr school for a number of years, in between various university teaching and research posts.

So clearly not a crank. But why light from the left?

So some more googling, and an explanation that I probably should have thought of myself. In “Electric Lighting”, one Olin Jerome Ferguson of the University of Nebraska, writing in 1920, pointed out that when writing at a desk, it’s helpful if the light doesn’t cast a shadow over your work. So if you’re right-handed, then light from the left would help. And Dr Clark had made it clear that light from the rear means that there’s less glare and excessive contrast for the blackboard at the front of the classroom. So it all makes sense, and I guess it’s so obvious to school designers that Dr Janet Clark didn’t feel the need to explain. My bad.

A final quote of Dr Clark from a chapter on Lighting in the Workplace:

“Cigar factories – Cigars are sorted in respect to color and the varieties of shades of brown are distinguished with difficulty if at all, under yellowish artificial light”
(p.116)

Couldn’t agree more!

What colour chimeras tell us about vision. July 28, 2008

Posted by Emma Byrne in Uncategorized.
Tags:
1 comment so far

ResearchBlogging.org
I managed two impossible things before lunch today [1]. I crossed the road this morning, confident I could avoid the big red bus that I saw in my peripheral vision. I read the small-type links in the WordPress sidebar, despite the fact that they are in blue text. However, if my retina is to be believed, this shouldn’t be possible.

Colour photoreceptors (cones) are not distributed evenly across the retina. In the fovea, “red” and “green” cones are densely packed, and there are no “blue” cones and no rods. So detailed vision takes place with no (direct) knowledge of short-wave light.

However, just 5° of visual angle from the fovea, the density of the cones drops dramatically (from around 150,000/mm² at the central fovea to fewer than 10,000/mm²), while the density of the rods rises from none at the fovea to more than 150,000/mm² at around 15° from the centre of the fovea.

5° of visual angle is about two thumbs’ width when held at arm’s length. Hold up your hands with your thumbs side by side (at arm’s length). Keep your hands forward, so they look something like the “bird” hand shadow. Now fixate on your thumbs. Everything around your thumbs is seen by an area of the retina that is much richer in rods than in cones, whereas the thumbs themselves are seen with the part of the retina that is pretty much all cones. Yet the skin on the back of your hand looks just as colourful as the skin on your thumbs.
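That “two thumbs” figure is easy to check: an object of width s viewed at distance d subtends a visual angle of 2·atan(s/2d). A quick sketch of the arithmetic, assuming thumbs about 2 cm wide each and an arm’s length of about 60 cm:

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object: theta = 2 * atan(size / (2 * distance))."""
    return 2.0 * math.degrees(math.atan(size_m / (2.0 * distance_m)))

# Two thumbs side by side (~4 cm) held at arm's length (~60 cm):
print(visual_angle_deg(0.04, 0.60))  # ~3.8 degrees, in the right ballpark for 5
```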

So why are colours in the periphery and colours in the centre of the visual field perceived so similarly? The short answer is “because it’s useful”. If the things you saw kept changing colour as they moved across your visual field, that would make object identification very difficult indeed. But this doesn’t tell us how the visual system “reconstructs” colour. Is the colour “spread” through the visual field by taking the statistics of the centre and applying them to the surround? Or is there a “top-down” effect, such that knowledge of what is in the visual field tells us how it should look?

Balas and Sinha devised a neat experiment to discriminate between these hypotheses. They decided to address the following questions:

  1. “Do observers complete the colour content of natural scenes when larger regions of the image have had colour artificially removed?”
  2. “If colour completion occurs, does it do so more readily from the centre of an image outwards as opposed from the periphery inwards?”
  3. “If colour completion occurs, does it depend on natural scene statistics?”

To answer the first and second questions they created “colour chimeras” – images that were desaturated (“greyed out”) either in the centre or at the edges. Volunteers were presented with images that were entirely grey, entirely coloured (“pan-field” coloured), colour-centre chimeras or grey-centre chimeras. They found that subjects were much more likely to mistake chimeric images for pan-field colour images than they were to mistake them for grey images. Importantly, it didn’t matter whether the chimera was greyed out at the centre or the edge: the volunteers still saw a significant proportion of the images as pan-field coloured.
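(A grey-centre chimera of this sort is easy to mock up. Below is a minimal sketch of the stimulus idea using NumPy and PIL – my own illustration, not Balas and Sinha’s actual code.)

```python
import numpy as np
from PIL import Image

def grey_centre_chimera(path, radius_frac=0.4):
    """Desaturate a central disc of an image, leaving the surround in colour."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Pixels within the central disc get greyed out.
    mask = np.hypot(yy - h / 2.0, xx - w / 2.0) < radius_frac * min(h, w) / 2.0
    grey = img @ np.array([0.299, 0.587, 0.114])  # standard luminance weights
    out = img.copy()
    out[mask] = grey[mask][:, None]  # copy the grey value into R, G and B
    return Image.fromarray(out.astype(np.uint8))

# grey_centre_chimera("beach.jpg").save("beach_chimera.jpg")
```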

To answer the third question the researchers altered the textural and the colour information in the images. In the first experiment the chimeric and non-chimeric images were all natural scenes (beaches, trees etc.). In the second experiment, some volunteers were presented with natural scenes. Some were presented with scenes in which the colour had been altered to a single hue, so that the colours were not natural but shapes within the image were still recognisable. Others were presented with scenes in which the textures were changed, so that the structure of objects was no longer recognisable but the distribution of colours was the same as in the original image. Some subjects were presented with images that had both colour and texture manipulations, in which the original objects and colours could no longer be recognised. How would this affect the volunteers’ ability to spot chimeras?

Volunteers were less likely to “fill in” colour when they were presented with the manipulated chimeras instead of the natural scenes. Textural changes reduced the ability to “spread” colour to the rest of the scene, and colour manipulations reduced this ability even more. However when images were manipulated both for colour and texture, subjects were very good at spotting chimeras (or very bad at filling in colour).

The authors conclude that colour spreading is a common perceptual phenomenon (much more common than “grey spreading” – the misidentification of chimeras as grey images). Furthermore, they conclude that scene statistics provide important perceptual cues that support this colour spreading. So the next time you see a bus in your peripheral field and you know that it’s red, it’s probably because you’ve seen red buses before, and not because your retina tells you so.

[1] I know it should be six before breakfast, but it’s very hot today. Vaughan at Mind Hacks is obviously made of sterner stuff.

Balas, B., & Sinha, P. (2007). “Filling-in” colour in natural scenes. Visual Cognition, 15(7), 765-778. DOI: 10.1080/13506280701295453

Watch where you go: see where you went May 28, 2008

Posted by David Corney in Uncategorized.
2 comments

ResearchBlogging.org

Just a quick one… I just read Leo Peichl’s excellent 2005 review of the diversity of mammalian photoreceptors, and it’s a goldmine of fascinating detail, full of examples and interesting speculation. It summarises his and many other people’s findings over the years, covering classes of photoreceptor, their distribution on the retina, possible evolutionary pathways, and so on. One item stood out though. Apparently, most rodents can see ultraviolet (UV) light (as, incidentally, can plenty of non-mammalian species). It’s not clear why though, given that plenty of rodents are nocturnal and plenty are diurnal: why should they see UV? One suggestion is that rodent urine is highly UV-reflective. As Peichl says, “…rodents might profit from seeing their scent marks in addition to smelling them.”

Which is perhaps bizarre, but makes sense. But how did this evolve? Wouldn’t it be easier to develop, say, coloured urine, if seeing it really helps? I think some birds can see UV, so it can’t be a camouflage issue. And what about the first rat to see his own pee?

“Look everyone! I can see where I peed yesterday!”

“You been eating that fermented cheese again, Barry?”

Peichl, L. (2005). Diversity of mammalian photoreceptor properties: Adaptations to habitat and lifestyle? The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology, 287A(1), 1001-1012. DOI: 10.1002/ar.a.20262

Moles and their eyes May 12, 2008

Posted by David Corney in Uncategorized.
Tags:
3 comments

ResearchBlogging.org

I always thought that moles, being subterranean, were virtually blind. Turns out I was right, but their eyes are much more interesting than I would have thought. A new paper by Glösmann et al. in the Journal of Vision taught me lots. Briefly, moles have at least two types of colour photoreceptor cell (i.e. cones), potentially giving them colour vision in line with most mammals. However, their short-wavelength cone is down-shifted relative to humans’, meaning that they can see ultraviolet (UV) light. The lens and cornea of the human eye scatter most blue/violet/UV light, to protect the sensitive retina from potentially damaging UV light. Presumably, if you’re subterranean, then such damage isn’t an issue, so moles have lenses that transmit blue/UV light much better than ours do.

Also, it seems that many or most of their cones co-express both medium- and short-wavelength sensitive opsins (light-sensitive proteins). I’d always thought that each cone type only had a single photopigment, so ‘S’ cones just had ‘S’ opsins, and ‘M’ cones just had ‘M’ opsins. Turns out that many mammals, including moles and humans, show co-expression of S and M opsins during at least some stage of their development. Co-expression means that the sensitivity functions are broader than would otherwise be expected, so a co-expressing ‘blue’ cone will be more sensitive to green/yellow light than before, and a co-expressing ‘green’ cone more sensitive to blue light. I suppose that in theory, given three opsins, one could have three single-expression cone types, plus three dual-expression types (see comments), plus one triple-expression cone type. Could the rest of the visual system make sense of this? Yes! (I think.) Having more cone types may reduce the spatial acuity, as it reduces the density that any single cone type could have, but increases the colour sensitivity. And if the response functions largely overlapped, then I don’t think the loss of spatial sensitivity would be too great anyway. It might require a few new post-receptoral channels, but as long as each cone gave an essentially unchanging response to any given stimulus, then the rest of the visual system should be able to interpret things correctly.
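For what it’s worth, that count is just the number of non-empty subsets of the opsin set: C(3,1) + C(3,2) + C(3,3) = 3 + 3 + 1 = 7 possible cone types. A throwaway check (my own illustration, not from the paper):

```python
from itertools import combinations

opsins = ["S", "M", "L"]
# Every non-empty subset of expressed opsins is a potential cone type:
cone_types = [c for r in (1, 2, 3) for c in combinations(opsins, r)]
print(cone_types)       # 3 single-, 3 dual- and 1 triple-expression type
print(len(cone_types))  # 7
```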

The Final Fascinating Fact I learnt from this paper is why moles can see at all: the main reason seems to be so they can detect breaks in their tunnels. If something is burrowing in to eat them, or if a passing heavy cow accidentally causes a mini-collapse, the mole has to know so that it can run away or repair the damage. Which makes me wonder: if the soil above part of a tunnel becomes progressively weakened, e.g. by air or water erosion, would UV light get through before visible light? Might UV sensitivity allow a mole to go and fix an otherwise invisible weakness and prevent tunnel collapse? Or is their UV sensitivity merely a left-over from some other evolutionary branch? Or does it somehow help them to simply mess about in boats?

Reference: Glösmann, M., Steiner, M., Peichl, L., & Peter, A. (2008). Cone photoreceptors and potential UV vision in a subterranean insectivore, the European mole. Journal of Vision, 8(4), 1-12.

PS Don’t forget, of course, that every mole contains 6.02214×10^23 molecules
