Colonnade Soffit Artwork
The Sainsbury Wellcome Centre colonnade features some 950 polycarbonate pixels that present different images when viewed from the East or the West.
Artwork viewed when looking West
Music pixels reproduce the score of Johann Sebastian Bach's Musical Offering (1747):
Ricercar a 3 - regarded as an extraordinary expression of human imagination.
Artwork viewed when looking East
Portrait pixels depict 11 separate winners of the Nobel Prize in Physiology or Medicine affiliated with University College London whose rigorous scientific investigations have benefitted humanity.
- Professor Archibald Vivian Hill 1922
- Sir Frederick Gowland Hopkins 1929
- Sir Henry Hallett Dale 1936
- Professor Peter Brian Medawar 1960
- Professor Francis Harry Compton Crick 1962
- Professor Andrew Fielding Huxley 1963
- Professor Sir Bernard Katz 1970
- Professor Ulf Svante von Euler 1970
- Professor Sir James Black 1988
- Professor Bert Sakmann 1991
- Professor Sir Martin Evans 2007
Art installations in the vitrines of the Sainsbury Wellcome Centre seek to engage passers-by in the Centre's work through a series of visual illusions that highlight intriguing aspects of perception. The five exhibits, which were designed by Marty Banks Consulting (Marty Banks, Hany Farid, Maria Mortati), are based on the following topics: illusion, distortion, inversion, deception and perception.
This exhibit illustrates how the brain's assumptions about shape and lighting affect perception. Two different types of object, one convex and illuminated from above and one concave and illuminated from below, generate very similar images, each consistent with a face. We perceive a normal convex face in both cases, even though the concave object is not a normal face at all.
The retina of the eye is two-dimensional and the world is three-dimensional. This means our brain has to construct the third dimension from inputs beyond what is transmitted to our eyes. In order to do this, our brain makes a series of assumptions often based on prior experience.
We assume that the illumination of an object comes from above because that is almost always the case. We also assume that objects are typically convex (not hollow). In the rare cases in which these assumptions do not hold, the physical world can look quite peculiar.
In constructing the third dimension from the two-dimensional image in the eye, the brain uses assumptions about the world. In this demonstration, the main assumptions are that illumination comes from above and that objects are usually convex. The concave (hollow) face illuminated from below makes us aware of these assumptions by violating them. The combination of concavity and light from below creates the illusion that the concave face is convex. As you move left and right, the concave face appears to rotate as if it were following you. In his well-known book, The Intelligent Eye, the British psychologist Richard Gregory had this to say about the hollow-face illusion: "The strong visual bias of favouring seeing a hollow mask as a normal convex face is evidence for the power of top-down knowledge for vision."
The figure below shows two identical photographs, one rotated 180 degrees. The left-hand photograph appears to show a large depression with a small hill in the middle: a crater. The right-hand photograph appears to show a large raised cone with a smaller depression in the middle: a cinder cone. We thus have quite different interpretations of two identical photographs. We normally assume that light comes from above and interpret the pattern of brightness variations accordingly. In both images, we assume that the brighter regions face toward the sun and the darker regions face away from it. The perceived shape follows from those assumptions.
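The light-from-above ambiguity can be made concrete with a toy rendering: under a simple Lambertian shading model, a convex bump lit obliquely from one side produces exactly the same image as a concave dimple lit from the opposite side. The sketch below is not part of the exhibit; the Gaussian height field and light directions are illustrative choices.

```python
import numpy as np

def lambertian_image(height, light):
    """Shade a height field under a distant light (Lambertian model)."""
    gy, gx = np.gradient(height)
    # The surface normal at each pixel is proportional to (-dz/dx, -dz/dy, 1).
    norm = np.sqrt(gx**2 + gy**2 + 1.0)
    shading = (-gx * light[0] - gy * light[1] + light[2]) / norm
    return np.clip(shading, 0.0, None)   # surfaces facing away stay dark

# A convex Gaussian bump on a 64 x 64 grid.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
bump = np.exp(-(x**2 + y**2) / 0.1)

light_above = np.array([0.3, 0.5, 0.8])    # oblique light from one side
light_below = np.array([-0.3, -0.5, 0.8])  # horizontal components reversed

img_convex = lambertian_image(bump, light_above)    # bump, lit from "above"
img_concave = lambertian_image(-bump, light_below)  # dimple, lit from "below"

print(np.allclose(img_convex, img_concave))  # True: the images are identical
```

Flipping the surface negates the horizontal components of every normal, and reversing the horizontal components of the light negates them again, so the dot product, and hence the image, is unchanged. This is why the brain's light-from-above assumption is needed to pick one interpretation.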
This exhibit illustrates how numerous three-dimensional objects can generate the same two-dimensional image in the eye, yet we perceive one particular object among the many possible ones. Here, two different three-dimensional objects generate images consistent with a bicycle. We perceive bicycles even though neither object is at all similar to a conventional bicycle.
The world is three-dimensional. The eye’s retina is two-dimensional, yet we are able to generate compelling three-dimensional percepts from the images formed on our retina. We do so by calling upon our extensive knowledge of the world. We perceive the most familiar 3D object that is consistent with the 2D image in the eye.
Our eyes form an image of the 3D world onto the 2D retina, like a camera forms an image. This process is called perspective projection. Because a dimension is lost in this process, there are many 3D objects that can create precisely the same 2D image. Despite this, we perceive the 3D object that is most familiar and not other plausible objects that could have generated the same 2D image.
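The loss of a dimension under perspective projection can be shown in a few lines. The sketch below is an illustrative toy, not from the exhibit: it projects a square, a uniformly scaled copy, and an Ames-style object distorted along the lines of sight, and all three form the same 2D image.

```python
import numpy as np

def project(points):
    """Pinhole perspective projection: (X, Y, Z) -> (X/Z, Y/Z)."""
    points = np.asarray(points, dtype=float)
    return points[:, :2] / points[:, 2:3]

# Corners of a unit square facing the viewer at depth Z = 2.
square = np.array([[0.0, 0.0, 2.0],
                   [1.0, 0.0, 2.0],
                   [1.0, 1.0, 2.0],
                   [0.0, 1.0, 2.0]])

# A square three times larger and three times farther away...
larger_farther = 3.0 * square
print(np.allclose(project(square), project(larger_farther)))  # True

# ...and an Ames-style distortion: slide each corner a different distance
# along its own line of sight, so the object is no longer a square in 3D.
scales = np.array([1.0, 2.5, 0.7, 4.0])[:, None]
distorted = scales * square
print(np.allclose(project(square), project(distorted)))  # True
```

Scaling any point along its line of sight multiplies X, Y and Z by the same factor, which cancels in X/Z and Y/Z; this is the geometric trick behind the Ames Chairs.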
In 1934, the American scientist Adelbert Ames used techniques of perspective projection to create the Ames Chairs. (Ames also created the well-known Ames Room and Ames Window.) The chairs had no right angles or parallel lines, but when viewed from a particular position, the retinal images they created were identical to those created by normal chairs. Remarkably, people perceived the Ames Chairs as normal chairs even though they were strikingly dissimilar from such chairs. A schematic is shown below. The lower row shows what the three chairs look like from the correct vantage point; the upper row shows the same three chairs seen from an incorrect vantage point.
The figure below illustrates the ambiguity of perspective. On the left is one view of a hallway; on the right are two other views of the same hallway. The paint is in fact disjointed, yet it is perceived as an undistorted whole when viewed from one particular position. This illustrates that, despite the ambiguity of perspective, people experience the simplest, most familiar object.
When the two faces are upside down, they appear normal. But when they are upright, you can see that one is very wrong. We have evolved special mechanisms for distinguishing and recognising faces. We almost always see faces right-side up, so those mechanisms work best with upright faces.
On a day-to-day basis we are exposed to a vast array of faces, from the familiar to the unfamiliar. We easily recognise people whom we know, and can easily detect the presence of a face in our surroundings. We have a great deal of visual experience with many common objects, but our experience with faces is almost always with upright faces. This has led to a peculiar and engaging aspect to face perception.
Our perception of faces most likely consists of two distinct processes. The first is local processing of individual facial features such as the mouth, nose, and eyes. The second is global processing of the relationship between these features. These processes are incredibly sensitive and effective, but seem to struggle when faces are seen upside down.
Dr. Peter Thompson (University of York) serendipitously created a remarkable illustration that provides some insight into how our visual system processes faces. Panels A and B are slightly different photos of the late Prime Minister Margaret Thatcher. Although you may notice slight differences between the two photographs, both images are easily recognisable. Panels C and D are the same faces rotated to be right-side up (if you don't believe us, rotate your computer/device). Now you can see that something is grotesquely wrong with the photos in panels A and C: the eyes and mouth have been flipped relative to the rest of the face.
It seems that when the face is upside down, global processing of the relationship of facial features fails to operate properly. This may occur because we have very little experience with, and therefore little need to recognise, inverted faces. There is, however, still considerable debate about what this failure of our visual system means. As Dr. Thompson has said: "it's not that we now understand more about face processing because of it, but rather we appreciate better how puzzling the problem is."
Like sound, the visual world is made of varying frequencies. At long distance, high-frequency textures (i.e., fine detail) cannot be resolved. At short distance, such textures emerge and dominate our perception. Composed of both low and high frequencies, these faces change identity as we move closer or farther away.
Like sound, the visual world is made of varying spatial frequencies. Spatial frequency is analogous to the more familiar concept of sound frequency or pitch. A high frequency sound has many vibrations per second (a violin), while a low frequency sound has few vibrations per second (a bass). Similarly, a high frequency visual pattern has many abrupt changes in brightness or colour across space (the grass in the image below) while a low frequency visual pattern has only gradual changes (the clouds).
At large viewing distances, high frequency information cannot be resolved by our visual system while low frequency information can (e.g., you may be able to see the colour of a sign in the distance, but you cannot read what is written on it). When we are far from an object, we can only see the low frequencies. When we are near to an object, we can see both the high and low frequencies. But high frequency information tends to be a more valid indicator of important object details and boundaries, so those frequencies dominate our perception when they are resolvable.
Dr. Philippe Schyns (University of Glasgow) and Dr. Aude Oliva (Massachusetts Institute of Technology - MIT) have created hybrid images that illustrate these two perceptual phenomena. The image below is a combination of the high frequency part of Albert Einstein's face and the low frequency part of Marilyn Monroe's face. As you view this image at close range, the high frequency information (Einstein) dominates your perception. As you move farther away (which we simulate by making the image smaller), the high frequency information becomes imperceptible and the low frequency information (Monroe) emerges perceptually.
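A hybrid image of this kind can be sketched computationally: blur one image to keep its low spatial frequencies, subtract a blurred copy of the other image from itself to keep its high frequencies, and sum the two. The code below is an illustrative reconstruction, not the published method: Gaussian filtering and the sigma values are assumptions, and random textures stand in for the two photographs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(high_src, low_src, sigma=4.0):
    """Low frequencies of one image plus high frequencies of another."""
    low = gaussian_filter(low_src, sigma)                # coarse structure only
    high = high_src - gaussian_filter(high_src, sigma)   # fine detail only
    return low + high

def corr(a, b):
    """Pearson correlation between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Random textures stand in for the Einstein and Monroe photographs.
rng = np.random.default_rng(0)
einstein = rng.random((128, 128))
monroe = rng.random((128, 128))

img = hybrid(einstein, monroe)

# Up close, the unblurred hybrid is dominated by the fine ("Einstein") detail.
# Simulate a distant view by blurring away the high frequencies:
far_view = gaussian_filter(img, 8.0)

# From "far away", the hybrid resembles the low-frequency image far more
# than the high-frequency one.
print(corr(far_view, gaussian_filter(monroe, 8.0)) >
      corr(far_view, gaussian_filter(einstein, 8.0)))   # True
```

Shrinking the image, as the exhibit does, has the same effect as blurring: it pushes the fine detail beyond what the visual system can resolve.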
The grey bars moving up and down are identical, but they appear to change in brightness as they move. Our perception of brightness is not absolute, but is strongly influenced by the surroundings. In this phenomenon, our perception of the grey bars changes when they are sandwiched between the white and the black bars. This effect, known as White's Illusion, was discovered in 1979 by Dr. Michael White, an Australian psychologist.
While reading a book on optical art, Dr. Michael White stumbled upon a design by an 11th grade student. The design consisted of black, white, and grey elements with two sets of grey segments, one interleaved among black elements and one interleaved among white elements. Although the grey segments were identical in lightness, one set of segments appeared much darker than the other. The accompanying text did not offer an explanation and Dr. White set out to understand this phenomenon. Nearly 40 years later, it is still somewhat of a puzzle.
Several theories have been put forth to explain White's illusion, but to date there is little consensus. One theory posits that a grey bar sandwiched between two uninterrupted white stripes "assimilates" to the brighter stripes and thus appears brighter, while a grey bar sandwiched between two uninterrupted black stripes "assimilates" to the darker stripes and thus appears darker. Another theory notes that the grey bars appear to form a solid shape that lies either in front of or behind the white and black stripes, giving rise to an appearance of transparency which in turn influences our interpretation of the lightness of the grey bars. Other theories rely on specific stages of visual processing. Despite the lack of consensus, White's illusion has become an important test case for models of visual perception.
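A version of the stimulus itself is straightforward to generate. The sketch below is illustrative (stripe count, bar positions and grey level are arbitrary choices): it builds a black-and-white square-wave grating with two physically identical grey test bars, one embedded in a black stripe and one in a white stripe.

```python
import numpy as np

def whites_stimulus(size=200, n_stripes=8, grey=0.5):
    """Black/white square-wave grating with two identical grey test bars,
    one embedded in a black stripe and one in a white stripe."""
    stripe_h = size // n_stripes
    img = np.zeros((size, size))
    for i in range(n_stripes):
        img[i * stripe_h:(i + 1) * stripe_h, :] = i % 2  # 0 = black, 1 = white
    img[0:stripe_h, 40:90] = grey                # bar inside a black stripe
    img[stripe_h:2 * stripe_h, 110:160] = grey   # bar inside a white stripe
    return img

stim = whites_stimulus()

# Both test bars have exactly the same physical grey value...
print(stim[10, 60], stim[35, 130])   # 0.5 0.5
# ...yet to an observer the bar embedded among the white stripes looks
# darker than the bar embedded among the black stripes.
```

Because the two bars are pixel-for-pixel identical, any perceived lightness difference must arise in the visual system, which is what makes the stimulus so useful for testing models of lightness perception.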
- aic-colour-journal.org/index.php/JAIC/article/download/15/13

Written by: Marty Banks and Hany Farid, March 2015