Eye Strips Images Of All But Bare Essentials Before Sending Visual Information To Brain, UC Berkeley Research Shows
The eye as a camera has been a powerful metaphor for poets and scientists alike, implying that the eye provides the brain with detailed snapshots that form the basis for our rich experience of the world.
Recent studies at the University of California, Berkeley, however, show that the metaphor is more poetic than real. What the eye sends to the brain are mere outlines of the visual world, sketchy impressions that make our vivid visual experience all the more amazing.
"Even though we think we see the world so fully, what we are receiving is really just hints, edges in space and time," said Frank S. Werblin, professor of molecular and cell biology in the College of Letters & Science at UC Berkeley. Werblin is part of UC Berkeley's Health Sciences Initiative, a collaboration among researchers throughout the campus to tackle some of today's major health problems.
The brain interprets this sparse information, probably merging it with images from memory, to create the world we know, he said.
In a paper in the March 29 issue of Nature, doctoral student Botond Roska, M.D., and Werblin provide evidence for between 10 and 12 output channels from the eye to the brain, each carrying a different, stripped-down representation of the visual world.
"These 12 pictures of the world constitute all the information we will ever have about what's out there, and from these 12 pictures, which are so sparse, we reconstruct the richness of the visual world," Werblin said. "I'm curious how nature selected these 12 simple movies and how it can be that they are sufficient to provide us with all the information we seem to need."
While scientists have known that the eye forwards several parallel representations of the world to the brain, what these are and how they are produced has been a mystery.
"What we have done," Roska said, "is show that the retina creates a stack of image representations, how these image representations are formed and that they are the result of cross-talk between layers of cells in the retina."
The results are a big step toward producing a bionic eye employing a unique computer chip that can be programmed to do visual processing just like the retina. The chip, called a Cellular Neural Network (CNN) Universal Machine, was invented in 1992 by Roska's father, Tamás Roska, and Leon O. Chua, a professor of electrical engineering and computer sciences at UC Berkeley.
"The biology we are learning is going into improving the chip, which is getting more and more similar to the mammalian retina," Roska said. "Nevertheless, a bionic eye is a far-fetched notion until someone figures out how to connect it to the neural circuitry of the brain."
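The CNN Universal Machine is built on the Chua-Yang cellular neural network model, in which each cell on a grid evolves as dx/dt = -x + A*y + B*u + z, where u is the input image, y is a saturated output, and A, B and z are programmable templates. The Python sketch below is an illustration of that model only, not the chip's firmware; the edge-extracting template values, the bias z = -1.5, and the Euler time step are assumptions chosen for this toy example. It shows how one programmed template can strip an image down to its outline, loosely analogous to the retina's edge channel.

```python
import numpy as np

def conv_valid(padded, kernel):
    # 3x3 correlation over a padded array, returning the valid interior
    H, W = padded.shape[0] - 2, padded.shape[1] - 2
    out = np.zeros((H, W))
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + H, dj:dj + W]
    return out

def cnn_run(u, A, B, z, dt=0.05, steps=400):
    """Euler-integrate Chua-Yang CNN dynamics:
       dx/dt = -x + A*y + B*u + z,  y = 0.5*(|x+1| - |x-1|).
    State starts at the input; the boundary is clamped to white (-1)."""
    x = u.copy()
    up = np.pad(u, 1, constant_values=-1.0)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        yp = np.pad(y, 1, constant_values=-1.0)
        x = x + dt * (-x + conv_valid(yp, A) + conv_valid(up, B) + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Assumed edge-extraction template (black = +1, white = -1):
# a pixel stays black only if at least one neighbor is white.
A_edge = np.array([[0., 0., 0.], [0., 2., 0.], [0., 0., 0.]])
B_edge = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])

u = -np.ones((5, 5))
u[1:4, 1:4] = 1.0            # a solid black square on a white field
y = cnn_run(u, A_edge, B_edge, z=-1.5)
```

Running this, the square's border pixels settle to black while its interior and the background settle to white: only the edge survives, which is the sense in which such a chip "does visual processing like the retina."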
Over a period of nearly three years, Roska painstakingly measured signals from more than 200 ganglion cells in the rabbit retina as he flashed pictures of a featureless square or circle. Ganglion cells are the eye's output cells; their axons bundle together to form the optic nerve that connects the eye to the brain.
"We made very simple measurements on retinal cells, recording excitation and spiking when we flashed squares and moving spots in front of the eye," Roska said.
From these, he and Werblin determined that there are about a dozen different populations of ganglion cells, each spanning the full visual space and producing a different movie output.
One group of ganglion cells, for example, only sends signals when it detects a moving edge. Another group fires only after a stimulus stops. Another sees large uniform areas, yet another only the area surrounding a figure.
"Each representation emphasizes a different feature of the visual world (an edge, a blob, movement) and sends the information along different paths to the brain," Werblin said.
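The flavor of these parallel channels can be caricatured in a few lines of Python. This is a toy sketch, not Roska and Werblin's actual retinal model; the three channel definitions below are invented for illustration. Each "channel" keeps only one feature of a two-frame movie and discards everything else:

```python
import numpy as np

def channels(prev, curr):
    """Toy parallel channels: each sends a stripped-down view of a
    two-frame movie, not the full image (illustrative only)."""
    # spatial-edge channel: forward differences, so uniform areas vanish
    dy = np.abs(np.diff(curr, axis=0, append=curr[-1:]))
    dx = np.abs(np.diff(curr, axis=1, append=curr[:, -1:]))
    edges = dy + dx
    # "on" channel: responds only where light has just appeared
    onset = np.clip(curr - prev, 0.0, None)
    # "off" channel: responds only after a stimulus stops
    offset = np.clip(prev - curr, 0.0, None)
    return {"edges": edges, "on": onset, "off": offset}

blank = np.zeros((6, 6))
frame = np.zeros((6, 6))
frame[2:4, 2:4] = 1.0               # a small bright square appears
ch = channels(blank, frame)          # square appearing: "on" fires
ch_gone = channels(frame, blank)     # square vanishing: "off" fires
```

No single channel carries the whole picture; reconstructing the scene from such sparse streams is exactly the puzzle Werblin describes.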
The two researchers shared these detailed findings with software designer David Balya in Hungary, who modeled the visual processing on a computer, a preliminary step before actually programming a CNN chip to simulate the image processing that goes on in the eye. The computer model precisely mimics the output of the ganglion cells of the retina, vividly showing the difference between the world we see and the information that actually is sent to the brain.
"We now are looking at the predictions the model makes when viewing natural scenes Frank's face or leaves on the ground and comparing them with what we measure in actual retinal cells, to learn how good these predictions are," Roska said.
Though scientists realize that the eye is not merely a camera providing digital input to the brain, the general consensus had been that the image projected onto the retina and detected by cells called photoreceptors was sent to the brain after only relatively simple processing.
Roska and Werblin showed that retinal cells do a lot of processing to extract only the essence of the picture to send to the brain. The anatomy of the retina is layered to facilitate this.
Light initially impinges on the light-sensitive cells of the eye, the photoreceptors, which fire off signals to a layer of horizontal cells and thence to bipolar cells. Since 1969, Werblin has been recording from all retinal cells and has detailed how each cell type processes data from the photoreceptors.
The bipolar cells funnel signals down their axons (the outgoing wires of the nerve cell) and relay them to the dendrites (the input wires) of ganglion cells, which send the processed information to the brain. All these cell types are arrayed in unique layers, stacked one atop the other.
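The feedforward part of this chain can be sketched crudely in Python. This is a caricature, not a quantitative model: the 3x3 pooling neighborhood standing in for the horizontal cells' lateral spread is an assumption. Photoreceptor signals are pooled by horizontal cells, and bipolar cells report the difference between each point and its surround, so uniform areas cancel out:

```python
import numpy as np

def retina_feedforward(light):
    """Toy chain: photoreceptors -> horizontal cells (local pooling)
    -> bipolar cells (center minus surround). Illustrative only."""
    photo = light.astype(float)                 # photoreceptor response
    # horizontal cells pool neighboring photoreceptors (3x3 mean,
    # edge-replicated at the border)
    p = np.pad(photo, 1, mode="edge")
    H, W = photo.shape
    horiz = sum(p[i:i + H, j:j + W]
                for i in range(3) for j in range(3)) / 9.0
    # bipolar cells signal center minus surround: flat regions cancel
    return photo - horiz

flat = retina_feedforward(np.ones((5, 5)))      # uniform field -> silence
step = np.zeros((5, 6))
step[:, 3:] = 1.0                                # a vertical luminance edge
resp = retina_feedforward(step)                  # response only at the edge
```

Even this cartoon reproduces the key point: a featureless field produces no output at all, while an edge produces opposite-signed responses on its two sides.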
Biologists had noted earlier that not all ganglion cells are alike and that they fire off different information to the brain, though the details were hazy. Part of the reason is that the axons from the bipolar cells synapse with, or touch, the dendrites of the ganglion cells in a tangled region (the inner plexiform layer) that made biologists despair of making sense of the connections.
Roska discovered, however, that this region of tangled axons and dendrites is really laid out in orderly strata. By staining the cells from which he recorded, he found that bipolar cell axons converge on 12 or so well-defined layers, where they synapse with the dendrites of the ganglion cells. Each layer of dendrites belongs to a specific population of ganglion cells.
Without interaction between layers, though, the signal emerging from the tangle would not be much different from the original 12-channel output of the bipolar cells. The critical element is another type of cell, the amacrine cells, which send processes to the various layers of dendrites and allow the layers to talk with one another. This cross-talk is what allows the layers to process the visual data and extract the sparse information that the ganglion cells send up to the brain.
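One crude way to picture how cross-talk sparsifies the signal is the following invented toy model (not the measured amacrine circuitry; the inhibition weight is an assumption): let each stratum's ganglion drive be its bipolar input minus inhibition proportional to the average activity of the other strata, then rectify. Activity unique to one layer passes through; activity shared across layers is suppressed:

```python
import numpy as np

def ganglion_outputs(bipolar_layers, crosstalk=0.5):
    """Toy stack: each stratum's ganglion signal is its own bipolar
    drive minus amacrine-style inhibition from the average of the
    other strata, rectified. Illustrative only."""
    stack = np.stack(bipolar_layers)                  # (layers, H, W)
    others = (stack.sum(axis=0) - stack) / (len(bipolar_layers) - 1)
    return np.clip(stack - crosstalk * others, 0.0, None)

active = np.ones((4, 4))
silent = np.zeros((4, 4))
unique = ganglion_outputs([active, silent])   # feature seen by one layer
shared = ganglion_outputs([active, active])   # feature seen by both
```

In this sketch a feature present in only one layer is transmitted at full strength, while redundant activity shared across layers is attenuated, which is one way cross-talk could "extract the essence" before the ganglion cells fire.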
"Previously, when people studied ganglion cells, they would look at the cell and flash lights. One of Botond's major contributions to this was, he thought about this not as the cell, but as the layer of processes from which the cell is reading. So, we began to think in terms of layers, and all of the activity we measured corresponded to what happened in a particular layer," Werblin explained. "Then it became clear that these layers were actually talking to each other. Previously no one had even thought that these layers talked to one another, even though 100 years ago the picture was there. No one had really looked at that picture."
The work was supported by grants from the Office of Naval Research and the National Institutes of Health.
Source: University of California, Berkeley. April 2001.