Mar 02

About that dress . . .

[Image: the dress, from Tumblr]

In case you haven’t been paying attention recently, the dress shown in this picture is the latest viral craze.  Is it white and gold, as some people perceive it, or black and blue, as other people perceive it?  It’s actually black and blue.  But why are there such differences in perception?  Inquiring minds across the world demand to know.  And we’ve seen many answers, on TV and all over the web—many from neuroscientists.  Unfortunately, all of the answers I have seen are at least partly wrong.

I’m in a good position to answer this not so much because I’m a neuroscientist (although that helps) as because I contributed for years to software development for the GIMP project.  GIMP is the open-source equivalent of Photoshop; that is, it is a program for manipulating digital images.  In order to work on it I had to learn a lot about digital photography—and this dress thing is really more about photography than about human visual perception.

If you look at the photo above, you can see that it is backlit.  The light, in other words, comes mainly from behind the subject.  That’s a situation that always causes problems, particularly when the subject is dark.  If the camera’s exposure is set by the overall light in the picture, then the subject shows up as almost black; if the exposure is increased enough to make the subject visible, then diffused light from the background washes everything out.  One way or another, unless you do very sophisticated manipulations, the result is that colors end up severely desaturated—as happens with the blue of the dress here.
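To make that concrete, here is a rough sketch in Python using only the standard-library colorsys module.  The starting color and the amount of veiling light are made-up illustrative numbers, not values measured from the photo; the point is simply that mixing a uniform wash of light into a dark color drains its saturation.

```python
import colorsys

def veil(rgb, amount):
    """Mix a uniform 'veil' of white light into a color, the way diffused
    backlight washes out a dark subject.  Components and amount are 0..1."""
    return tuple(c * (1.0 - amount) + amount for c in rgb)

def saturation(rgb):
    """HSV saturation of an RGB triple with components in 0..1."""
    return colorsys.rgb_to_hsv(*rgb)[1]

royal_blue = (0.15, 0.25, 0.60)        # a reasonably saturated blue (illustrative)
washed_out = veil(royal_blue, 0.5)     # heavy veiling light from the backlit scene

print(f"original saturation:   {saturation(royal_blue):.2f}")  # about 0.75
print(f"washed-out saturation: {saturation(washed_out):.2f}")  # about 0.28
```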

Why do some people perceive this and others not?  I believe it is largely a matter of familiarity with photography.  Nowadays huge numbers of people own smartphones and take pictures at a rate of dozens per day—so they get many chances to compare what they see with their eyes against what their phones see.  They learn what works and what doesn’t.  They don’t necessarily learn it explicitly, but they develop the ability to tell when an image is distorted.  To a person familiar with photography, it is quite obvious that the dress picture is severely washed out, and that the true colors are much more saturated than the image shows.  To a person who hasn’t taken thousands of phone pics, this mental compensation might not occur.
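That mental compensation can be mimicked crudely in software: take the light bluish gray that the photo actually shows, boost its saturation and darken it, and a clear blue comes back.  In the sketch below the starting pixel value is an assumed stand-in for the lighter stripes in the image, not a sampled measurement, and the boost factors are arbitrary.

```python
import colorsys

def resaturate(rgb, sat_boost, value_scale):
    """Crudely 'undo' the wash-out: raise HSV saturation and darken.
    rgb components are 0..255."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    s = min(1.0, s * sat_boost)
    v = v * value_scale
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

# An assumed light bluish gray standing in for the lighter stripes in the photo.
apparent = (128, 133, 166)
print(resaturate(apparent, sat_boost=3.0, value_scale=0.6))
# roughly (31, 40, 100): a clear dark blue
```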

The explanations that I have seen from neuroscientists have all been based on the phenomenon of color constancy:  the ability of the human visual system to compensate for lighting in order to judge the true color and brightness of an object.  That’s a very important phenomenon, and because it takes place largely below the level of conscious awareness, many people don’t appreciate how powerful and pervasive it is.  But it isn’t the explanation of this particular effect.  This effect does involve a compensation process, but of a different sort.

