Learning Objectives
By the end of this section, you should be able to
- 6.4.1 Describe how visual pathways divide visual information from left and right visual fields for projection to right and left cortical areas, respectively
- 6.4.2 Define the optimal stimuli for simple and complex cortical cells
- 6.4.3 Describe how LGN receptive fields combine within the receptive field of simple cortical cells to drive simple cell activity
- 6.4.4 Describe how simple cortical cell receptive fields combine within the receptive field of complex cortical cells to drive complex cell activity
- 6.4.5 Describe the retinotopic map of the primary visual cortex
- 6.4.6 Describe the functional architecture of the primary visual cortex, including ocular dominance columns, cytochrome oxidase blobs, and orientation pinwheels
- 6.4.7 List the different types of stimuli that can excite primary visual cortical cells, including their combination with color
From the retina, the main visual pathway for conscious perception goes to the lateral geniculate nucleus of the thalamus (LGN), after which LGN neurons project to the primary visual cortex. Visual information then goes to a series of extrastriate visual areas, the inferotemporal cortex, and a number of other brain regions involved in identifying, localizing, and interacting with visually perceived objects. In this section, we will sample several of these areas to see how neurons code visual information, eventually reaching neurons that are selectively responsive to objects and faces located anywhere in the visual field.
Visual Fields and the Visual Pathway
Figure 6.19 shows the first stages of the visual pathway, going from the eye through the thalamus to the primary visual cortex.
Each eye is oriented so that the fixation point, at the boundary between the left and right visual fields, falls on the fovea at the center of the retina. Because lenses invert images, the left visual field (shown in green) is imaged on the right side of each retina, and the right visual field (purple/blue) is imaged on the left side. Ganglion cell axons from both sides of the retina bundle together to form the optic nerve, which leads to the optic chiasm in the midline. There, the nasal (inner) half of the axons from each retina cross over, while the temporal (outer) half remains on its side of origin. Consequently, the right side of the brain (green) receives axons from the right half of each retina, and the left side of the brain (purple) receives axons from the left half of each retina. Because the left half of each retina views the right visual field, each hemisphere of the brain receives information about the opposite side of the visual world from both eyes. Lesions in the visual pathway on one side of the brain therefore lead to deficits in seeing the opposite side of the visual world.
Lateral Geniculate Nucleus
The main cerebral target for the retinal ganglion cell axons is the lateral geniculate nucleus (LGN), a structure in the thalamus on the path to the primary visual cortex. The LGN is a six-layered structure. The left side of Figure 6.20 shows the LGN in a post-mortem monkey brain. The dark purple dots are cell bodies and show how distinct these six layers are. Each of these layers is unique in the input it receives. Two layers are composed of large neurons, magnocellular layers 1 and 2, and four layers are composed of smaller neurons, the parvocellular layers 3-6. Magnocellular retinal ganglion cells, which convey low resolution information and respond well to motion, send their axons to the magnocellular LGN layers. Parvocellular ganglion cells with small receptive fields, red/green opponent color responses, and high resolution connect to the parvocellular layers. There are also small intermediate koniocellular LGN layers, which were overlooked in early studies. They are the target for axons from blue/yellow ganglion cells. The neurons in each LGN layer are arranged in a two-dimensional array that matches the ganglion cell locations in the retina. This preserves a map of receptive field positions, a retinotopic map.
Although ganglion cells from the right half of each eye project to the LGN on the right side of the brain, the ganglion cells from each eye connect to different layers of the LGN. Thus, information from the two eyes remains separate in the LGN, and neurons driven by both eyes will not appear in the visual pathway until the next stage, the primary visual cortex.
V1 Simple, Complex and “Hypercomplex” Neurons
LGN neurons that receive input from retinal ganglion cells send their axons to the primary visual cortex (V1), a large visual area at the back of the occipital lobe, extending onto the medial (midline) surface of the hemisphere. This area is also called “striate” cortex because cross sections appear striated, or striped, and it is sometimes referred to as “area 17” in reference to an early scheme that numbered cortical areas based on their appearance. V1 is the first stage of visual processing in the cortex. In a collaboration that began in 1958, David Hubel and Torsten Wiesel conducted an extensive research program to discover how V1 cells respond to visual stimuli (more about them and their experiments can be found later in this section). By recording from individual cortical neurons in experiments that often lasted for more than 24 hours, they showed that the cortex could be understood at the level of individual neurons, and that those neurons had receptive fields responsive to edges, carrying forward the processing begun at earlier stages of the visual pathway toward the perception of objects and scenes. Hubel and Wiesel’s experimental method was to advance an insulated needle electrode into the visual cortex to pick up extracellular action potentials from neurons along the electrode’s path (see Methods: Electrophysiology). They projected visual stimuli on a screen facing the anesthetized animal (initially cats and later monkeys) while they searched for the most effective visual stimulus. By listening to the neuron’s action potentials as they altered the projected stimulus, they mapped the neuron’s receptive field: the region of the visual field, and the pattern within it, that optimally excited the cell. Once they characterized a neuron’s receptive field, they advanced the electrode until it encountered another neuron, which they then mapped. In this way, they gathered data on the receptive fields of hundreds of V1 neurons in each experiment.
In addition to mapping receptive fields, they also kept data on the location of each neuron along the electrode’s track, so they could later reconstruct the positions of the mapped neurons on slices of the post-mortem brain. In this way, they were able not only to classify V1 neurons into functional categories but also to reveal the functional architecture of the primary visual cortex.
The earliest experiments recording from individual neurons in V1 used stimuli that had been effective in exciting or inhibiting retinal ganglion and LGN cells: small light or dark spots. Many cortical cells could be stimulated and mapped with small spots. But unlike the earlier neurons in the visual pathway, where receptive fields were circular with opposing center and surround regions, the cortical receptive fields had elongated excitatory and inhibitory areas organized along straight-line edges. Hubel and Wiesel found that bright bars, dark bars, or an edge between light and dark were even more effective stimuli than spots. For a vigorous response, the straight-line stimulus had to be at an appropriate angle of orientation aligned with the receptive field edge, and positioned exactly within the receptive field (Figure 6.21). They called these first cortical cells, whose receptive fields could be mapped with small spots, “simple cells.”
Hubel and Wiesel also discovered a second group of cortical cells that could not be stimulated with small spots. They named these cells complex cells. At first, they were a puzzle to the experimenters. Hubel has told the story of how they accidentally discovered what was special about complex cells. After struggling for hours without finding an effective stimulus for the neuron they were recording from, they slid a glass slide out of the projector, and as the faint dark edge of the slide moved across the screen, the neuron suddenly fired vigorously. It soon became clear that complex cells would respond to edges but not to spots. Their receptive fields could not be divided into static excitatory and inhibitory areas. Instead, complex cells responded to an appropriately oriented straight-line edge at any position in the receptive field, and if the angled light or dark bar swept across the receptive field, the cell would respond continuously (Figure 6.22). Many complex cells were also directionally selective, responding to an edge moving in one direction but not the reverse. This was a clear difference from simple cells, for which an appropriately oriented edge is effective at only one exact position, and it seemed to be a step beyond them, hence the name “complex.”
Hubel and Wiesel recorded videos that showed how they mapped receptive fields of simple and complex cells. The videos show the screen in front of the animal on which stimuli are projected, and we hear the action potentials (as clicks) produced by the neuron they are mapping. Although the video quality is not the best, these historically important videos show how V1 neurons respond selectively to projected visual stimuli.
Other neuroscientists explored responses of V1 neurons to visual stimuli beyond the projected bright or dark bars and edges that Hubel and Wiesel used. One group of stimuli, generated on a computer screen, consists of sinusoidal luminance gratings. These look like soft-edged stripes and are discussed in the Feature Box on spatial frequency selectivity. Theoretically, any image can be broken down into its component spatial frequencies, which makes these stimuli of particular interest. V1 neurons actually respond more vigorously to appropriately oriented spatial frequency gratings than they do to simple bars or edges, and a neuron’s preferred spatial frequency is now considered a significant characteristic of its receptive field.
Building Simple and Complex Receptive Fields
Hubel and Wiesel were interested in how information from LGN neurons with center-surround receptive fields could be transformed into the elongated receptive fields of cortical cells. They proposed that a simple cell in V1 receives excitatory synaptic input from a group of LGN neurons whose overlapping receptive fields are staggered along a straight line. Activating any one of the LGN cells with an appropriate light or dark spot would slightly excite the simple cell, but an oriented bar or edge covering all of the LGN receptive field centers would activate every LGN cell at once, eliciting a vigorous response from the simple cell. Later experiments that recorded simultaneously from LGN and V1 neurons confirmed this scheme. Figure 6.23 shows the proposed cellular anatomy of these connections on the left, with multiple LGN neurons converging to excite a single V1 simple cell. The receptive field perspective on the right shows how a bar of light would provide the most effective stimulus for a V1 simple cell receptive field.
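This convergence scheme is easy to simulate. The sketch below is a minimal, illustrative model rather than Hubel and Wiesel's own analysis: each LGN input is approximated as a difference-of-Gaussians center-surround field, five such fields are stacked along a vertical line, and the modeled simple cell sums their rectified responses. The function names and parameter values are assumptions chosen only for the demonstration.

```python
import numpy as np

def lgn_receptive_field(size, center, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians approximation of one ON-center LGN receptive field."""
    y, x = np.mgrid[0:size, 0:size]
    d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    center_part = np.exp(-d2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround_part = np.exp(-d2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return center_part - surround_part

SIZE = 21
# Five LGN receptive fields whose centers are staggered along a vertical line,
# as in Hubel and Wiesel's convergence proposal.
lgn_fields = [lgn_receptive_field(SIZE, center=(10, row)) for row in (6, 8, 10, 12, 14)]

def simple_cell_response(image):
    """Modeled simple cell: sum of the rectified responses of its LGN inputs."""
    return sum(max((rf * image).sum(), 0.0) for rf in lgn_fields)

def bar_image(angle_deg, size=SIZE, half_width=1.5):
    """A bright bar through the image center; angle_deg = 0 gives a vertical bar."""
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    distance_from_axis = np.abs(x * np.cos(theta) + y * np.sin(theta))
    return (distance_from_axis < half_width).astype(float)

for angle in (0, 45, 90):
    print(f"bar at {angle:3d} deg -> response {simple_cell_response(bar_image(angle)):.3f}")
# The vertical bar covers all five LGN centers at once and drives the largest response.
```

Running the loop shows that the vertical bar, which falls across every LGN center simultaneously, drives a far larger modeled response than an oblique or horizontal bar, just as the convergence model predicts.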
Hubel and Wiesel further proposed that complex cells were excited by the activity of a group of simple cells with similar orientation preferences but overlapping, slightly offset receptive field positions (Figure 6.24). An edge at an appropriate angle and position would excite one or more of the simple cells, thereby exciting the complex cell, but the stimulus could be at any position in the receptive field and still excite the complex cell. A moving edge crossing the receptive field would lead to continuous excitation. Directional selectivity could be imposed by additional circuitry that enhanced the complex cell’s response to movement in one direction but suppressed the response to the opposite direction.
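A toy calculation makes the position invariance of this pooling scheme concrete. In the sketch below (an illustration, not a model from the original papers), each simple cell is reduced to a narrow tuning curve for the position of an appropriately oriented bar, and the modeled complex cell takes the maximum over a pool of such cells with offset preferred positions. All names and numbers are assumptions.

```python
import numpy as np

# Positions (in degrees of visual angle) of a thin, appropriately oriented bar
# swept across the visual field.
bar_positions = np.linspace(-2.0, 2.0, 81)

def simple_cell(preferred_pos, positions, tuning_width=0.15):
    """Toy simple cell: responds only when the bar sits at one exact position."""
    return np.exp(-((positions - preferred_pos) ** 2) / (2 * tuning_width ** 2))

# A pool of simple cells sharing the same orientation preference but with
# receptive fields at slightly different, overlapping positions.
pool = np.stack([simple_cell(p, bar_positions) for p in np.linspace(-1.0, 1.0, 9)])

# The modeled complex cell is driven by whichever simple cell the bar currently
# excites, so pooling (here, a max) makes its response position-invariant.
complex_response = pool.max(axis=0)

one_simple_cell = simple_cell(0.0, bar_positions)  # a single simple cell, preferring position 0
for pos in (-1.0, -0.5, 0.0, 0.5, 1.0, 1.8):
    i = np.argmin(np.abs(bar_positions - pos))
    print(f"bar at {pos:+.1f} deg: simple cell -> {one_simple_cell[i]:.2f}, "
          f"complex cell -> {complex_response[i]:.2f}")
# The single simple cell fires only when the bar is at its one preferred position,
# while the complex cell responds anywhere within the pooled region (about -1 to +1 deg)
# and falls silent beyond it (+1.8 deg).
```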
As Hubel and Wiesel continued their exploration of complex cells’ responses, they found an additional characteristic. Making an appropriate stimulus bar longer so that it spilled beyond the receptive field had no effect on the responses of many complex cells, but it decreased the responses of others (Figure 6.25). This effect, in which a stimulus bar extending beyond the receptive field actually inhibits a cortical cell’s response, is referred to as endstopping, and Hubel and Wiesel named the new category “hypercomplex cells.” Later research suggested that endstopping is a graded property present to varying degrees across complex cells, and most neuroscientists no longer regard hypercomplex cells as a separate category. Figure 6.25 shows examples of complex cells with weak endstopping (top) and strong endstopping (bottom). Both examples respond more and more strongly as the stimulus elongates to fill the receptive field, but the strongly end-stopped cell stops responding when the stimulus extends beyond it.
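One way to see how endstopping behaves is with a toy length-tuning calculation. The sketch below is purely illustrative (the receptive field length, end-zone size, and weights are assumptions, not measured values): drive grows with the portion of the bar inside the classical receptive field and is suppressed by the portion extending into inhibitory end zones.

```python
def endstopped_response(bar_length, rf_length=2.0, end_zone=1.5,
                        excitation=1.0, end_inhibition=0.8):
    """Toy end-stopped cell: excitation grows with the part of the bar inside the
    classical receptive field and is reduced by the part that spills into the
    inhibitory end zones beyond it. All parameters are illustrative."""
    inside = min(bar_length, rf_length)
    beyond = min(max(bar_length - rf_length, 0.0), 2 * end_zone)
    return max(excitation * inside - end_inhibition * beyond, 0.0)

for length in (0.5, 1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"bar length {length:.1f} deg -> response {endstopped_response(length):.2f}")
# The response grows until the bar fills the receptive field (2 deg here), then declines
# as the bar extends into the end zones. Setting end_inhibition=0 would instead make the
# response plateau, like the weakly end-stopped complex cell in Figure 6.25.
```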
The functional purpose of endstopping has been puzzling, but one possible explanation is that it would make complex cells respond selectively to curved edges. A long curve falling in the receptive field would excite an end-stopped neuron if the segment within the receptive field had an appropriate orientation angle, but the part of the curve falling outside the receptive field would have a different angle that would not trigger endstopping. In contrast, long straight lines would retain the same orientation angle outside the receptive field and would activate inhibitory endstopping.
Spatial Frequency Selectivity
Neurons in V1 respond selectively to sinusoidal luminance gratings, which is yet another characteristic of a neuron’s receptive field. These gratings look like evenly spaced stripes with soft edges. Three examples are shown at the top of Figure 6.26. They are called “sinusoidal” because the grating’s brightness changes from light to dark and back again with a profile of intensity that follows a sine wave. Gratings are specified by their spatial frequency: how many cycles (pairs of light and dark stripes) fit within one degree of visual angle. Low spatial frequency gratings have broad stripes and appear blobby, while high spatial frequency gratings have narrow stripes and convey fine detail.
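Because a grating is defined entirely by its orientation, contrast, and spatial frequency, it is straightforward to generate one numerically. The snippet below is a minimal sketch (the function and parameter names are illustrative) that builds a grating whose luminance follows a sine wave along the axis perpendicular to the stripes.

```python
import numpy as np

def sinusoidal_grating(size_deg=4.0, pixels_per_deg=50, cycles_per_deg=2.0,
                       orientation_deg=0.0, contrast=0.5, mean_luminance=0.5):
    """Luminance image of a sinusoidal grating: intensity varies sinusoidally
    along the axis perpendicular to the stripes. Parameter names are illustrative."""
    n = int(size_deg * pixels_per_deg)
    y, x = np.mgrid[0:n, 0:n] / pixels_per_deg        # pixel coordinates in degrees
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles_per_deg * (x * np.cos(theta) + y * np.sin(theta))
    return mean_luminance * (1 + contrast * np.sin(phase))

low_sf = sinusoidal_grating(cycles_per_deg=0.5)   # broad, blurry-looking stripes
high_sf = sinusoidal_grating(cycles_per_deg=8.0)  # narrow stripes carrying fine detail
print(low_sf.shape, low_sf.min(), low_sf.max())   # luminance stays between 0.25 and 0.75
```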
Selectivity for spatial frequency seems important because any real image can be computationally decomposed into its component spatial frequencies (“Fourier analysis”). The street scenes in the middle of the figure illustrate the contributions of low and high spatial frequencies to a real image. The original image (left) contains a full range of spatial frequencies, but if just the low spatial frequencies are presented (center), the scene appears blobby and out of focus. Alternatively, the high spatial frequencies show the scene’s edges and reveal fine detail (right).
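The decomposition illustrated by the street scene can be mimicked with a two-dimensional Fourier transform: keeping only the frequencies below some cutoff gives the blobby, out-of-focus version, while keeping only those above it isolates the edges and fine detail. The sketch below is a simplified stand-in (a hard circular cutoff applied to a synthetic texture, with illustrative names), not the procedure used to make the figure.

```python
import numpy as np

def split_spatial_frequencies(image, cutoff_cycles_per_image=8):
    """Split an image into low and high spatial-frequency components using a
    hard circular cutoff in the Fourier domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.mgrid[0:rows, 0:cols]
    radius = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)  # distance from zero frequency
    low_mask = radius <= cutoff_cycles_per_image
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real    # blobby, out of focus
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real  # edges and fine detail
    return low, high

# Demonstration on a synthetic "scene" (random texture), since no photograph is bundled here.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
low, high = split_spatial_frequencies(scene)
print(np.allclose(low + high, scene))  # True: the two components add back to the original image
```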
Interestingly, if a V1 neuron is stimulated with a series of appropriately oriented sinusoidal luminance gratings, the neuron will respond best to a particular spatial frequency and the response will fall off sharply at non-optimal frequencies. This makes the neuron narrowly tuned to its best spatial frequency. If instead the same neuron is stimulated with bars of different widths, the most effective bar does not elicit as strong a response as the best grating, and the tuning is not very sharp (narrower bars continue to excite the neuron). This is shown at the right of Figure 6.26, which plots relative sensitivity vs. spatial frequency. In their studies of V1 neurons, Hubel and Wiesel displayed rectangular bars of light from a slide projector on a screen in front of the animal, but most experimenters now use sinusoidal luminance gratings on a computer screen.
Binocular Units
We noted earlier that information from the right side of each retina ends up on the right side of the brain, but that axons from the two retinas synapse in separate layers of the lateral geniculate nucleus. This means that LGN neurons projecting to the primary visual cortex are driven by one eye or the other but not both. In V1, however, binocular neurons are found that are driven by both eyes (Figure 6.27). Covering one eye and then the other while the animal views the stimulus screen shows whether a V1 neuron responds best to the left eye, the right eye, or, as is often the case, to both eyes together. If one eye is more effective than the other, the neuron is said to show “ocular dominance,” yet another characteristic of a V1 neuron’s receptive field.
It was also possible to demonstrate that these binocular neurons contribute to stereo vision. For some neurons, moving an appropriately angled edge in space in front of the animal generated a strong response only if the stimulus was at a particular distance; moving it nearer or farther decreased the neuron’s response. Such neurons would report on the depth of edges in the visual scene. You can demonstrate the basis of stereoscopic depth perception in your own vision by holding up a finger, first at arm’s length and then close to your face, while alternately closing one eye and then the other. You automatically perceive the distance of your finger as its relative position on your two retinas changes, affecting depth-sensitive neurons in your visual system. But automatic stereo vision operates only for near distances. For outdoor scenes, for example, other cues to depth take over, such as occlusion (which objects are in front of others) and atmospheric haze, while differences in the positions of distant objects on our two retinas become too small to be useful.
People Behind the Science: David Hubel and Torsten Wiesel
Hubel and Wiesel began their collaboration at Johns Hopkins University in 1958, working in the laboratory of Stephen Kuffler, who had discovered the receptive fields of retinal ganglion cells. Their goal was to record from neurons in the visual cortex. Both were immigrants to the US, Hubel from Montreal, Canada, and Wiesel from Sweden. Their experiments employed an electrode that Hubel had designed: a stiff tungsten wire sharpened to a point and insulated except for the tip. A miniature hydraulic drive bolted to the animal’s skull held the electrode and allowed it to be slowly advanced into the cortical tissue, where it recorded extracellular action potentials from neurons. At first, they used the stimulus arrangement that Kuffler had used to map the receptive fields of ganglion cells, adapting it to cortical recording by draping a bedsheet across the ceiling onto which they could project spots of light. Hubel has described how awkward this was, and they soon switched to the arrangement they used for all subsequent experiments: an anesthetized cat faced a screen on which the visual stimuli were projected, while the experimenters listened to action potentials from a neuron as they attempted to find the visual stimulus that evoked a maximal response. Listening to neuronal activity provided an effective way of monitoring responses: the electrical signal was amplified and sent to a loudspeaker, where the action potentials sounded like clicks, making it easy to detect when a neuron was responding well. You can experience Hubel and Wiesel’s method of mapping receptive fields in a series of videos that they made to demonstrate their work. Although the 1960s image quality is poor by modern standards, these historically important videos let you hear a V1 neuron’s action potentials while you watch the projection screen as the experimenters search for the best stimulus.
When Kuffler moved to Harvard soon afterward, Hubel and Wiesel moved with him and began the experiments on cats, and later monkeys, that revealed cortical neurons that detect edges and boundaries. In addition to classifying different types of neurons by their responses to visual patterns, they also characterized the cortex’s functional anatomy, revolutionary experiments that greatly advanced our understanding of the cerebral cortex and led to further discoveries.
An appreciative account of Hubel and Wiesel’s work was published 50 years after their first paper (Wurtz, 2009). Hubel’s Nobel Prize address is also available; it is a very readable, informal account of their work. An online video of his Nobel lecture shows Hubel’s casual speaking style.
Functional Anatomy of V1
The orderly layout of the retina is preserved in the layers of the lateral geniculate nucleus and across the surface of the primary visual cortex. This is the retinotopic map, shown on the left side of Figure 6.28. The right visual world is represented in the left visual cortex, with an orderly arrangement of receptive fields from the central retina to the periphery. Because many more retinal ganglion cells and LGN cells serve the fovea and central retina than serve the periphery, a disproportionately large area of cortex is devoted to the center of the visual field.
The retinotopic map was established using several methods, including locating the visual deficits (scotomas) of soldiers in World War II who had suffered head wounds that lodged shrapnel in the visual cortex. This established the overall layout of the human retinotopic map, which gives the average position in the visual world of the receptive fields of neurons at different cortical locations.
Hubel and Wiesel added fine detail to the map by recording from individual neurons. A representation of their findings is shown on the right side of Figure 6.28. The receptive fields of nearby neurons overlap around their average position in visual space, but if the electrode is moved about 3 mm across the cortical surface, the receptive fields of neurons in the new location no longer overlap with receptive fields for the earlier location.
As noted earlier, Hubel and Wiesel recorded the location of each neuron along the electrode’s track in addition to mapping its receptive field, so that they could later reconstruct the positions of the mapped neurons on slices of the post-mortem brain. These reconstructions helped reveal the functional anatomy of the primary visual cortex. They noticed that as their electrode was advanced vertically (radially) deeper into the cortex, the neurons along the electrode’s track shared the same orientation preference, while a nearby radial penetration encountered neurons that shared a different orientation preference, indicating that the cortex was organized in vertical orientation columns. They also saw broad regions that shared ocular dominance, where one eye drove a binocular neuron more strongly than the other eye; the dominant eye changed if the electrode was advanced horizontally (tangentially) across the cortex. They concluded that the cortex was organized in vertical columns of neurons that shared orientation and ocular dominance preferences.

Later researchers used a different technique, optical recording, to reveal the two-dimensional organization of orientation and ocular dominance columns in V1. Optical recording detects differences in blood supply that reflect the level of activity of nearby neurons. In this way it resembles fMRI, but it provides finer detail and requires creating a window in the skull to permit imaging of the cortical surface. By imaging the cortex while systematically presenting a series of oriented stimuli delivered to one eye or the other, regions of maximal activity could be identified. Using computational techniques to give each orientation preference a false color, the orientation columns were revealed to be arranged like pinwheels, while the ocular dominance columns formed long stripes (Figure 6.29).
Another structural feature in the cortex had been identified earlier: cytochrome oxidase blobs in the upper layers of the cortex. Cytochrome oxidase is an enzyme associated with metabolic activity, and by using a substrate that leaves a colored reaction product, the cortex was shown to have a regular array of “blobs,” small patches of neurons with high metabolic activity. The array of blobs across V1 seen in stained microscope sections became another architectural element to add to V1’s orientation and ocular dominance columns. Optical recording using achromatic (non-colored) striped gratings at various angles or gratings with alternating red and green stripes demonstrated that the cytochrome oxidase blobs coincide with regions that respond most strongly to color stimuli.
Putting all these features together created an overlapping map of orientation pinwheels, cytochrome oxidase blobs, and ocular dominance columns (shown in the bottom of Figure 6.29). The cortex has a repeating modular structure. Within each module, the blobs and the pinwheel centers are both aligned along the center of the ocular dominance columns, but they do not have a specific relation to each other. The overall implication is that as the visual cortex is built in development, a repeating subunit structure is genetically specified to organize the cortex.
The final aspect of cortical architecture to consider is cortical layers. V1 is named “striate” (striped) cortex because of the prominent layering seen in cross sections of stained cortex. There are six anatomical layers, numbered from 1 at the cortical surface to 6 adjoining the white matter. The layers turn out to have functional differences (Figure 6.30). Axons from the LGN arrive at the cortex in layer 4 and also in layer 6. Layer 4 is where simple cells are found (red dots), supporting the theory that simple cells receive direct synapses from LGN axons. Complex cells (blue) are found in layers 2 and 3 and in layers 5 and 6, where they could receive connections from simple cells in the same column, again consistent with the theory that simple cells drive complex cells. Complex cells in the upper layers project to more advanced cortical areas, leading to the next stages of visual processing. Complex cells in layer 5 project to the superior colliculus, and cells in layer 6 project back to the LGN, modifying the LGN’s responses to signals from the retina.
Color Vision in the Cortex
Most neurons in V1 receive their input from the LGN’s parvocellular, color-selective layers, but surprisingly, relatively few V1 neurons respond selectively to colors. In monkey V1, one type of color-selective receptive field is the center-surround, double-opponent receptive field, shown at the top of Figure 6.31. The example in the figure shows a neuron that is excited by red and inhibited by the opponent color, green, in its center; in the surround, the responses are reversed, so that green excites and red inhibits. A pattern in which the two colors meet at an edge is the strongest stimulus.
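A toy calculation shows why such a cell prefers a color boundary over any uniform color. The sketch below is an illustration only (the weights and the region-averaged inputs are assumptions, not measured values): red drives the center positively and the surround negatively, with green doing the reverse.

```python
def double_opponent_response(red_center, green_center, red_surround, green_surround):
    """Toy red-ON-center / green-ON-surround double-opponent cell: red excites and
    green inhibits in the center, and the signs reverse in the surround.
    Inputs are average cone-driven signals (0-1) in each region; weights are illustrative."""
    center = 1.0 * red_center - 1.0 * green_center
    surround = -1.0 * red_surround + 1.0 * green_surround
    return max(center + surround, 0.0)

stimuli = {
    "uniform red":                 dict(red_center=1, green_center=0, red_surround=1, green_surround=0),
    "uniform green":               dict(red_center=0, green_center=1, red_surround=0, green_surround=1),
    "uniform white (red+green)":   dict(red_center=1, green_center=1, red_surround=1, green_surround=1),
    "red center / green surround": dict(red_center=1, green_center=0, red_surround=0, green_surround=1),
}
for name, s in stimuli.items():
    print(f"{name:28s} -> response {double_opponent_response(**s):.1f}")
# Uniform colors and white cancel out; only the pattern with a red/green boundary
# falling on the receptive field drives a strong response.
```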
Other primate V1 neurons include simple and complex cells that are selective for color stimuli. The simple cell in the middle of Figure 6.31 is excited by flashing a properly aligned and positioned red bar on a green background (top), but reversing the colors elicits only an off response after the stimulus ends (bottom). The complex cell, also shown in the middle of Figure 6.31, is excited by a red-green border moving to the left, but not if the border moves to the right, and not if the colors are reversed.
An interesting complex cell (bottom of Figure 6.31) is strongly excited by an angled bar of light moving upward if the light is yellow, but not if the bar moves downward or if it is a color other than yellow. Note that white light does not excite this neuron, which means it would be overlooked in an experiment that did not test colored stimuli. These complex cells have all of the stimulus selectivities that we encountered earlier, such as orientation angle and movement direction, but in addition they are selective for color. Color selectivity in V1 is not well understood, and some aspects remain controversial. These and other V1 neurons provide the beginning of our visual perception of the form, color, and location of objects, but V1 is just the first stage of visual processing. Processing continues in areas outside of the striate cortex: the extrastriate cortex.