White’s Illusion Blanket. May 6, 2008. Posted by Emma Byrne in Uncategorized.
Just for fun, I made this pattern for a White’s illusion blanket. I haven’t made it yet (and this will be the first big thing I’ve ever knit) so I have no idea how many balls of wool it will take. I’ll post a photo and more details when it’s done.
Tags: distance, ecological learning, functional model, perception
This Sunday, I’ll be taking part in a little stroll around London.
I’ve been training for this for a few months now, and I’ve noticed something in my longer runs: I become very bad at estimating how long it will take to pass a given point. Like many long-distance runners, I tend to resort to distraction techniques as the fatigue kicks in. I pick a landmark – a paving slab, say, or a lamppost – and “bet” myself how long it will take to reach it. I find that I consistently overestimate: a lamppost that I think is 10 seconds away tends to be reached in seven. This effect gets more pronounced the longer I run. I’ve thought of several reasons why this may be:
- I consistently round up times to the nearest block of five seconds.
- I count seconds too slowly.
- I’m bad at estimating the distance to a landmark, especially when tired.
Hypothesis three seemed plausible, especially when I remembered reading a nice study by Proffitt et al (2003) that demonstrated (inter alia) significantly greater perceived distance when laden with a backpack than when not. Participants (n=24) were asked to give a visual reckoning of the distance from themselves to a cone in a flat, grassy field. Half of the participants were unladen for the duration of the task; the other half wore a backpack throughout. Proffitt et al found that all participants underestimated distance, but the laden participants gave significantly higher estimates than the unladen ones.
Perception of distance is usually thought of as being due to a combination of visual cues, such as occlusion, perspective, relative size and height, parallax, texture and brightness. However, there is a model that suggests the perception of egocentric distance is also functional. That is, what we really estimate is some function of the effort it would take to travel from “here” to “there”. Proffitt et al summarise this nicely:
“Berkeley concluded that perception of distance must be augmented by sensations that arise from eye convergence and from touch. For egocentric distances, tangible information arises from the effort required to walk a distance, and thus, effort becomes associated through experience with visual distance cues…”
They contrast the “functional” model of distance perception with the “geometric” model of visual perception:
“[In] complex, natural environments viewed with both eyes by moving observers, there is sufﬁcient information… to specify egocentric distance. Thus, a role for effort in perceiving distance seems unnecessary if the goal of perception is to achieve a geometrically accurate representation”.
Which, of course, it isn’t. The “goal” of perception (or at least, the reward associated with perception) is the ability to interact effectively with the environment. The backpack result presented by Proffitt et al supports the argument for a functional model of distance perception: what is estimated is not distance directly, but the effort required to cover that distance.
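The functional model can be sketched as a toy calculation: treat a distance estimate as anticipated effort, calibrated against the unladen case. To be clear, the cost function and every number below are my own hypothetical illustration of the idea, not Proffitt et al’s model.

```python
# Toy sketch of a "functional" distance estimate: perceived distance is a
# monotonic function of the anticipated effort to walk there.
# All parameters here are invented for illustration.

def anticipated_effort(distance_m, load_kg, body_kg=70.0):
    """Rough walking-cost proxy: energy scales with total mass moved."""
    joules_per_kg_per_m = 3.0  # ballpark walking cost, illustrative only
    return joules_per_kg_per_m * (body_kg + load_kg) * distance_m

def perceived_distance(distance_m, load_kg):
    """Map effort back onto a distance scale, calibrated unladen."""
    effort_per_metre_unladen = anticipated_effort(1.0, 0.0)
    return anticipated_effort(distance_m, load_kg) / effort_per_metre_unladen

unladen = perceived_distance(10.0, 0.0)   # matches true distance by construction
laden = perceived_distance(10.0, 15.0)    # same walk with a 15 kg backpack
print(unladen, laden)  # the laden estimate comes out larger
```

The point of the sketch is just that any effort measure that grows with load will, once read back as distance, reproduce the backpack result: same ground, bigger estimate when laden.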
So far so plausible. But results from Hutchison and Loomis (2006a) and Woods and Philbeck (2007) appear to cast doubt on the reproducibility of the backpack effect.
Hutchison and Loomis made two departures from the original method. The first was the use of a within-participants setup (each participant made estimates in both the laden and unladen conditions). The second was that participants were told that they would have to estimate the size of the target, rather than walk to it (Proffitt et al, 2006). Hutchison and Loomis (2006b) state that they failed to find any effect in the between-participant comparisons, and that the original Proffitt et al study did not prompt the participants to anticipate walking to the target.
Woods and Philbeck (2007) also fail to find an effect. Their method also differs from the original, in that they asked some participants to rate their (anticipated) effort on the Borg CR10 scale. However, levels of anticipated effort were also not significantly different between laden and unladen conditions. Which, to me, raises the question of whether or not the backpacks were sufficiently heavy. There certainly doesn’t seem to be anything in that result that invalidates the functional model of distance perception: no difference in effort = no difference in perceived distance.
What can be made of these differences? There are several other studies, many by Proffitt and Bhalla that support the functional model of distance (and slope) estimation. A functional model of distance perception is prima facie plausible – experiential learning is more likely to build a model of distance based on exertion (which is directly accessible to the organism) than a geometric measure of distance (not directly accessible, despite the invention of the ruler). However, the backpack results at least have not gone unquestioned.
So where does this leave me on Sunday? I haven’t had time to carry out a complete literature review of the effects of glucose depletion/aerobic exercise/fatigue/endorphin release on perception- but if someone wants to sponsor me to run next year’s marathon, I’ll happily participate in psychophysics experiments en route. And to be on the safe side, I’ll be running this year’s Marathon carrying as little as possible.
Hutchison, J. J., Loomis, J. M. (2006a). Does energy expenditure affect the perception of egocentric distance? A failure to replicate Experiment 1 of Proffitt, Stefanucci, and Epstein (2003) [Abstract]. Journal of Vision, 6(6):859, 859a, doi:10.1167/6.6.859.
Hutchison, J. J., Loomis, J. M. (2006b). Reply to Proffitt, Stefanucci, Banton, and Epstein. The Spanish Journal of Psychology, 9(2), 343-345. [PDF]
Proffitt, D.R., Stefanucci, J., Banton, T., Epstein, W. (2003). The role of effort in perceiving distance. Psychological Science, 14(2), 106-112. DOI: 10.1111/1467-9280.t01-1-01427
Proffitt, D. R., Stefanucci, J., Banton, T., Epstein, W. (2006). A final reply to Hutchison and Loomis. The Spanish Journal of Psychology, 9(2), 346-348. [PDF]
Woods, A. J., & Philbeck, J. (2007). Does perceived effort influence verbal reports of distance? [Abstract]. Journal of Vision, 7(9):421, 421a, doi:10.1167/7.9.421.
Colour perception and colour blindness. March 7, 2008. Posted by David Corney in Uncategorized.
Tags: colour blindness, colour vision, Munsell
A short post about three of my favourite things: colour vision, Matlab and the Tokyo subway system.
I’ve recently been reading Jeff Mather’s blog about photography and in particular, Matlab-based image processing. He describes a recent presentation by Yasuyo Ichihara about colour perception by people who are colour-blind, or “colour confused” as they seem also to be called. She (Ichihara) is part of a not-for-profit group called the “Color Universal Design Organization” (CUDO). They promote the sensible and considerate use of colour for things like presentations, graphs, street signs… and the Tokyo subway system. The same group includes Masataka Okabe and Kei Ito, who have a nice summary about the importance of colour design, including a potential importance to academics: “There is a good chance that [any] paper you submit may go to colour-blind reviewers. Suppose that your paper will be reviewed by three white males (which is not unlikely considering the current population in science), the probability that at least one of them is color-blind is whopping 22%!” Worth bearing in mind!
Some parts of the visual field are more equal than others. February 13, 2008. Posted by David Corney in Uncategorized.
Tags: evolution, Fovea, psychophysics, retina, visual cortex
It’s well-known that visual acuity is far higher in the centre of the visual field than the periphery. Something we see out of the corner of our eyes is blurred, until we turn our eyes to look directly at it, when we can see it much more clearly. This is partly due to the sparse distribution of cones in the periphery, but also due to later neural structures. For example, the visual cortex has more neurons dedicated to the central than the peripheral visual field. That much I knew. However, I just read a paper that describes many other non-uniformities in the visual field, which are rather less intuitive, at least at first glance.
The paper, by Fuller, Rodriguez and Carrasco, starts with a detailed review of what is already known about various asymmetries in perception, including the peripheral drop in acuity I just mentioned. But some other examples of asymmetry were new to me. For example, we have better acuity along the horizontal mid-line of the visual field than we do along the vertical mid-line (known as “horizontal-vertical asymmetry”). So if you fix your eyes on one point, you have (slightly but measurably) higher acuity 5 degrees left or right than you do 5 degrees up or down. Similarly, we have better acuity below the mid-line of the visual field than above (known as “vertical meridian asymmetry”). Again, these effects are due to both the non-uniform distribution of photoreceptors in the retina, and to the characteristics of the visual cortex and the rest of the visual pathway.
In the work presented here, the authors presented subjects with pairs of gratings (alternating dark and light bars) above and below a fixation point. The subject then had to decide which of the pair was of higher contrast, and whether its bars sloped to the left or the right. Over a large number of trials, they found a significant bias towards people choosing the “south” grating (the one below the centre of the visual field) as being the higher contrast one, even when it was physically identical to the “north” grating.
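The bias is easy to picture with a toy simulation of the task. This is not the authors’ analysis: the contrast gain and noise level below are numbers I’ve made up purely to illustrate how a small lower-field advantage turns into a reliable “south” bias over many trials.

```python
import random

random.seed(0)

# Hypothetical simulation of the two-grating task: north and south gratings
# are physically identical, but the south (lower-field) percept gets a small
# contrast gain before a noisy comparison. Gain and noise are invented.
SOUTH_GAIN = 1.05
NOISE_SD = 0.1

def trial(contrast=0.5):
    """One forced choice: which grating looks higher contrast?"""
    north_percept = contrast + random.gauss(0, NOISE_SD)
    south_percept = contrast * SOUTH_GAIN + random.gauss(0, NOISE_SD)
    return "south" if south_percept > north_percept else "north"

choices = [trial() for _ in range(10_000)]
south_rate = choices.count("south") / len(choices)
print(f"'south' chosen on {south_rate:.1%} of trials")  # reliably above 50%
```

Even a 5% perceptual gain, buried in noise on any single trial, shows up clearly once you aggregate thousands of trials – which is exactly why psychophysics needs so many of them.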
They then varied the experiments by providing an extra cue before the gratings appeared, either above or below the mid-line. The idea was to test whether “exogenous” attention (i.e. automatic pre-conscious attention) affected the visual asymmetries.
They found that this kind of peripheral cue exaggerated the perceived difference in contrast. So a stimulus that grabs your attention appears to have a higher contrast than it would do otherwise, and also that increase in attention is greater if it’s in the bottom half of your field of view.
The authors only briefly touch on why this all happens at the end of the paper. They comment that things on or near the ground in front of us may tend to be more important than things in the air – presumably because they’re much closer, and so require a faster fight-or-flight style response. The authors also question how this effect might vary during childhood: as one grows from being really short to being adult-sized, does the likely location of threats / rewards change?
This fits in with the whole ecological view of perception that I find fascinating, namely that we perceive the world in a way that has led to our (ancestors’) evolutionary survival, irrespective of whether that perception happens to be “accurate”. I wonder if the area of ground in front of you that is worth paying extra attention to grows over time? Is this affected by your growing motor skills as well? If you’re a child, you’re not going to be able to run very far or very fast, so perhaps it makes sense to pay less attention to things happening, say, 50 feet away, compared to an adult with longer legs. And a similar argument holds for the “horizontal-vertical asymmetry”: things on the horizon would tend to be more significant than things above us if our ancestors were used to hunting or running away from land-based animals, like gazelles and lions. It is a bit of “evo-psych” style speculation, but a few computer simulations might shed some light on the issue…
Fuller, S., Rodriguez, R.Z., Carrasco, M. (2008). Apparent contrast differs across the vertical meridian: Visual and attentional factors. Journal of Vision, 8(1), 1-16. DOI: 10.1167/8.1.16
I just wasn’t smiling at you*. January 31, 2008. Posted by Emma Byrne in Uncategorized.
Tags: Attractiveness, Cognition, Gaze
This isn’t specifically about vision, but Strick et al (Cognition) report a study of the effects of pairing stimuli with pictures of attractive or unattractive faces whose gaze was either averted from or directed towards the viewer.
In the first experiment, the images of faces were paired with unknown peppermint brands; in the second, they were paired with positive and negative adjectives. A third condition (faces with eyes closed) ensured that volunteers attended to the faces: whenever one of these appeared, the volunteer had to press a button.
To determine the effect of attractiveness and gaze on desire, the volunteers were asked to rate the peppermint brands for desirability at the end of the test. Mean self-reported desire for brands associated with the attractive/direct gaze condition was 4.46 out of a possible 7 and for the attractive/averted condition was 4.05. Direct gaze resulted in significantly higher desirability where the face was attractive (p= .04). For the unattractive faces, averted gaze led to fractionally but not significantly higher ratings than direct gaze.
The adjective experiment is an affective priming test. In affective priming, respondents must indicate whether an adjective they read is positive (‘exciting’, ‘happy’) or negative (‘boring’, ‘angry’) by pressing one of two keys. Responses to positive adjectives are generally faster than responses to negative adjectives, but priming can exaggerate or eradicate this difference. The experiments showed that attractive faces led to a greater difference in response time than unattractive faces (a mean difference between positive and negative judgements of 55ms for attractive faces versus 36ms for unattractive faces, p = .04). Attractive faces with direct gaze also made the response-time difference significantly longer than unattractive faces with direct or averted gaze (both p < .02). However, for unattractive faces, gaze had no significant effect on the difference in response times.
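For the record, the priming “effect” here is just a difference of mean response times. A minimal sketch, with invented raw RTs (only the two quoted mean gaps, 55ms and 36ms, come from the study):

```python
from statistics import mean

def priming_score(rt_positive_ms, rt_negative_ms):
    """Mean RT to negative adjectives minus mean RT to positive ones.
    A larger score means stronger positive-affect priming."""
    return mean(rt_negative_ms) - mean(rt_positive_ms)

# Invented response times, chosen to reproduce the reported mean gaps:
attractive_effect = priming_score([600, 610, 590], [655, 665, 645])    # 55 ms
unattractive_effect = priming_score([600, 610, 590], [636, 646, 626])  # 36 ms
print(attractive_effect, unattractive_effect)
```

The interesting comparison is then between these two scores across conditions (attractive vs unattractive, direct vs averted gaze), not the raw RTs themselves.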
This doesn’t seem to have made the same splash as last November’s press coverage of the Conway et al (Royal Soc) study that showed the same face is more attractive when gazing directly at the viewer than when not. It seems that for some of us, the power of our gaze may not be all we’d hoped it might be. Especially if we’re trying to sell mints.
Strick, M., Holland, R.W., van Knippenberg, A. (2008). Seductive eyes: Attractiveness and direct gaze increase desire for associated objects. Cognition, 106(3), 1487-1496. DOI: doi:10.1016/j.cognition.2007.05.008
Conway, C. A., Jones, B. C., DeBruine, L. M., Little, A. C. (2008). Evidence for adaptive design in human gaze preference. Proc. R. Soc. B, 275(1630), 63-69.
* You Could Have it So Much Better, Franz Ferdinand.
“The last message you sent said I looked really down,
and that I ought to come over and talk about it. Well,
I wasn’t down;
I just wasn’t smiling at you.”
Kiwi and night vision. January 29, 2008. Posted by David Corney in Uncategorized.
Tags: nocturnal vision
A (fairly) new paper by Graham Martin et al. in the wonderful PLoS ONE discusses kiwi and their eyes. Coming from (nearly) the opposite side of the world, I am (was) embarrassingly ignorant about kiwi. I knew they were large and flightless birds from New Zealand, but that was about it. (I even thought the plural was “kiwis”.) I now know that they’re nocturnal, like quite a few birds, but they’ve evolved surprising eyes.
To see in the dark, many nocturnal animals, such as owls, lemurs and some monkeys, have evolved relatively large eyes to gather what little light there is. Against this, however, eyes are heavy, being balls of mostly-water, and weight is always a concern if you’re flying. So at first glance, you might expect (as the authors mention) that a bird that stops flying might evolve to have larger and larger eyes, as weight becomes less of an issue. Especially if it’s nocturnal. However, kiwi have small eyes for their bodies, and what’s more, they have small optic nerves and small visual cortices. They’re not blind like cave fish, although given a few million years more, who knows?
Moving forward a few inches, all birds have nostrils, usually at the base of the bill or even inside the mouth. Kiwi, uniquely, have their nostrils at the tip of their bills, coupled with fine touch sensors all over the bill tips. They feed by pecking at surface-living insects or by probing the soil with their long bill and sensing underground insects, suggesting a convergent evolution to the same ecological niche filled by mammals in many parts of the world. And if you’re finding grubs underground, you don’t need vision, of course.
According to the “wiki-kiwi” page, in areas where people are absent, kiwi are active during the day. Which makes me wonder if they have ended up with reduced visual processing simply because they can’t see what they’re eating anyway, whether it’s day or night, so why waste the effort? It seems to me that there are two evolutionary stories that fit this data:
- In version one, kiwi evolved to find food in the topsoil with their beaks and so didn’t need good vision; in turn, they spent less energy growing and using eyes; and then finally they tended towards nocturnal behaviour because there was no extra cost to them.
- In version two, kiwi became nocturnal to avoid predators (not that there were any mammals to compete with until recently) or to find nocturnal insects perhaps; and then they developed poorer eyesight because good eyesight was no longer required.
It could of course be some mixture of the two, as evolutionary histories needn’t have a nice clear narrative. In either case, I guess they still need at least rudimentary vision for mate selection, not walking into trees, that kind of thing. Anyway, I’ve learned a lot about kiwi, for which I am grateful!
Martin, G.R., Wilson, K., Wild, J.M., Parsons, S., Kubke, M.F., Corfield, J., Iwaniuk, A. (2007). Kiwi Forego Vision in the Guidance of Their Nocturnal Activities. PLoS ONE, 2(2), e198. DOI: 10.1371/journal.pone.0000198
See also: “The allometry and scaling of the size of vertebrate eyes” doi:10.1016/j.visres.2004.03.023
“Some nocturnal animals rely on senses other than vision, which is reflected in their small eye size. Others take the strategy of increasing eye size as much as possible to compensate for the low light conditions.”
 I checked the OED to see what it said about the word “kiwi”. It gives the etymology as “Maori” which isn’t terribly informative, but does have a quote from a Walter Lawry Buller and his 1873 text, A history of the birds of New Zealand: “Last Sunday I dined on stewed Kiwi, at the hut of a lonely gold-digger.” So I’ve learned something already…
Classic paper: The evolution of the human eye. January 24, 2008. Posted by David Corney in Uncategorized.
Suppose that one type of eye is better at detecting information about the world than another. Then it will allow its owner to respond more usefully to the visual world, whether it’s finding food, avoiding predators and harsh conditions, monitoring its own movement, or whatever. As always, natural selection works at a local level, local both in space and time. So it doesn’t matter whether this eye is good, only whether it is better than the competition around it. Half an eye is better than no eye. In fact half an eye is better than 49% of an eye…
Darwin admitted that the evolution of something as intricate as the human eye, by nothing more than natural selection, seems “absurd”. I love science when it’s so counter-intuitive! I know that everything in the universe is made out of atoms and quarks and things, but it doesn’t look like it, intuitively. Stars at night don’t look like they’re billions of miles away. The Earth doesn’t look round. Anyway, back to the eye…
Back in 1994, Nilsson and Pelger presented a model of the possible evolution of eyes, consisting of a sequence of steps where each step is a) small enough to happen in one generation; and b) leads to an improvement in spatial acuity. They start with a simple, flat light-sensitive circle and see where it leads. Basically, the sequence starts with an increasing central depression, which means that light from either side gets cut off by the surrounding “ridge”. Thus the light falling on any one part of the patch is coming from a smaller and smaller region of the world, as the depression gets deeper. This “specialisation” defines an increase in spatial acuity. Then later, the “ridge” surrounding this central depression constricts, ultimately forming a “pupil”, and further limiting the amount of “world” whose light reaches each part of the light-sensitive “retina”.
Unfortunately, this process inevitably cuts down the total amount of light reaching the retina, so you end up with an eye that has really high spatial acuity, but that only works in really bright conditions… But if you add a clear protective covering over the initial light-sensitive patch, and if this covering gradually thickens just so, then it will start bending the light in such a way that acuity improves further without having to make the pupil any smaller. This “lens” allows a trade-off between acuity and sensitivity, so you can see clearly now, even in lower light levels.
Through various plausible, even pessimistic, assumptions, Nilsson and Pelger argue that you could get from the initial flat patch of light-sensitive cells to a fully-functioning vertebrate eye in just 1829 steps! Then with some more (plausible) assumptions about evolution and inheritance, they estimate this could occur in a little over 350,000 generations, something like 1500 times faster than evolution actually managed.
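Their headline numbers can be re-derived in a few lines. One caveat: the 0.005%-change-per-generation figure below follows from their stated pessimistic assumptions (heritability 0.5, selection intensity 0.01, coefficient of variation 0.01) via the standard response-to-selection formula, so treat this as my reconstruction of their arithmetic rather than a quote from the paper.

```python
import math

# Nilsson and Pelger's estimate, re-derived:
# 1829 steps, each a 1% change in some measured dimension, compound into the
# total morphological change from flat patch to camera eye.
steps = 1829
total_change = 1.01 ** steps  # roughly an 8e7-fold overall change

# With their pessimistic assumptions, each generation shifts the trait by
# only 0.005%, so count how many such generations the total change needs.
per_generation = 1.00005
generations = math.log(total_change) / math.log(per_generation)

print(f"{total_change:.3g}-fold change in about {generations:,.0f} generations")
```

Running this gives a little under 364,000 generations, matching the figure in the paper – which, at one generation a year, really is a geological eye-blink.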
The upshot is a nice straightforward explanation of the development of many different sorts of eyes through natural selection. Whether the historical evolution of humans involved this sequence is not clear, but that hardly matters: here is one way it could have happened, an existence-proof of the possible evolution of the eye.
In the paper, they also point out it could, theoretically, have happened a lot faster, if for example different changes happened in parallel rather than sequentially. Given ideal conditions, a truly benevolent ecology, I wonder just how quickly such an eye could evolve? It’s a big universe out there: what’s the record fastest evolution of an eye, from a standing start, I wonder…?
Nilsson, D., Pelger, S. (1994). A Pessimistic Estimate of the Time Required for an Eye to Evolve. Proceedings of the Royal Society B: Biological Sciences, 256(1345), 53-58. DOI: 10.1098/rspb.1994.0048
Blind as a fish. January 18, 2008. Posted by David Corney in Uncategorized.
Having just written a few days ago about nocturnal vision and moths that can see by starlight, I was intrigued to read about some blind cave fish via Living the Scientific Life. The theory is that over millions of years of living in perpetual darkness, these fish mutated until they no longer grew eyes, either to save the effort and energy, or because eyes are delicate and prone to infection. The news is that by taking fish from different populations and breeding hybrids, fully-functioning eyes reappeared in a single generation! It seems that each population mutated in a different way, so at least some hybrid offspring had all the “original” seeing genes still there. Populations matter in genetics.
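The genetics of that single-generation rescue is classic complementation, and it can be sketched in a few lines. The gene names and the two-gene setup below are placeholders for illustration, not the actual cavefish loci.

```python
# Sketch of why crossing two independently blind populations can restore
# sight in one generation: if each population is blind through a recessive
# mutation in a *different* gene, every hybrid inherits one working copy of
# each gene. Gene names are hypothetical placeholders.

def can_see(genotype):
    """Sight needs at least one functional ('+') allele at every eye gene."""
    return all('+' in alleles for alleles in genotype.values())

# Population A is broken at gene_a; population B is broken at gene_b.
pop_a = {"gene_a": ("-", "-"), "gene_b": ("+", "+")}
pop_b = {"gene_a": ("+", "+"), "gene_b": ("-", "-")}

# A hybrid offspring gets one allele from each parent at each gene.
hybrid = {gene: (pop_a[gene][0], pop_b[gene][0]) for gene in pop_a}

print(can_see(pop_a), can_see(pop_b), can_see(hybrid))  # False False True
```

Had both populations lost the *same* gene, the hybrids would stay blind – which is why the restored sight tells you the populations mutated independently.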
One thought: if moths can see (in colour, no less) by something as faint as starlight, and these different populations of fish independently gave up having eyes altogether, those caves must be very dark. Very dark indeed…
I liked the final quote from the paper, too:
This observation underscores the power of a well defined environment to repeatedly direct the evolution of the same end phenotype, regardless of initial genotype.
Full paper: Borowsky, R. (2008). Restoring sight in blind cavefish. Current Biology, 18(1), R23-R24. doi:10.1016/j.cub.2007.11.023
Problem solving? Active vision actively helps. January 15, 2008. Posted by Emma Byrne in Uncategorized.
Tags: Active Vision, Cognition
Steve Higgins over at Omni-Brain takes a recent paper by Thomas and Lleras as the subject of a great post. According to Higgins’ review, the study showed that subjects solved a problem much more quickly when directed to make one pattern of eye movements over others.
The comments thread is shaping up to be interesting too – is the effect due to embodied cognition (the eyes “acting out” the movements that the correct solution consists of) or is it to do with inter-hemispheric communication?
Definitely worth a look for Higgins’ great summary. The full paper is available here – no Athens password required.
Night and day. January 11, 2008. Posted by David Corney in Uncategorized.
Tags: illumination, insect vision, nocturnal vision
Although I’ve been researching vision and creating synthetic images for a while now, I’d never really thought about night-time illumination. Well, not beyond thinking, “It’s dark at night!” I suppose. But then I read a paper by Javier Hernandez-Andres and his group in Granada about crepuscular and nocturnal vision, and learned that natural light at night is a lot more complex (and interesting) than I thought. During the day, all light comes from the sun or from skylight, which is just scattered sunlight. But at twilight and through the night, there are many and varied light sources.
After the sun passes below the horizon, it still lights up the sky for a while so that’s one source of light. Then there’s moonlight, which is a direct reflection of the sun and has a very similar spectrum to daylight, at least for a high and full moon. And there’s starlight, which has a spectrum roughly like daylight but fainter and with four distinct spikes around the yellow / red region. Then illumination starts getting really exotic. There’s “airglow”, which was first (officially) noticed by Anders Ångström (he of the unit) in the mid-19th century. It consists of various light-emitting molecular processes in the upper atmosphere, which produces a faint blue-ish glow across the sky. Then there’s “zodiacal light”, which was noted by Cassini (he of the Saturn orbiter) in the 17th century. It consists of sunlight bouncing off scattered cosmic dust between the planets of our solar system, so again it has the same spectrum as sunlight, albeit fainter. And apparently, the very dark blue sky seen during late (“nautical”) twilight is that colour because of ozone absorption, and not (just) due to sunlight scattering effects. In other words, it’s not just “blue sky but a bit darker”, but is blue for a different reason.
The final nocturnal light source Hernandez-Andres et al. mention is anthropogenic light – light pollution. This varies enormously across space and time of course, with a strong yellow/red shifted spectrum suddenly appearing whenever a million streetlights click on at dusk, along with car headlights, office lights, advertising hoardings etc. etc. Scientists are now realising that many nocturnal animals, including some moths, rely on very subtle colour cues for foraging and mating, just as diurnal animals do. What effect light pollution is having on these creatures seems to be unknown, but presumably it forms a strong selection pressure, at least near built-up areas. Sounds like a ripe area of future study…
Johnsen, S. (2006). Crepuscular and nocturnal illumination and its effects on color perception by the nocturnal hawkmoth Deilephila elpenor. Journal of Experimental Biology, 209(5), 789-800. DOI: 10.1242/jeb.02053