Learning Objectives
By the end of this section, you should be able to
- 18.2.1 Describe two kinds of nonassociative learning.
- 18.2.2 Explain a typical classical conditioning experiment.
- 18.2.3 Describe the conditioned fear paradigm.
- 18.2.4 Explain a typical operant conditioning experiment.
As we learned in 18.1 Memory is Classified Based on Time Course and Type of Information Stored, long-term memories can be categorized as implicit or explicit. In this section, we will dive more deeply into how implicit memories are formed. In the next section, we will cover explicit memory formation.
Implicit memory can be divided into associative and nonassociative categories, which are distinguished by the number of stimuli involved in learning. Nonassociative learning (and therefore nonassociative memory) involves learning information about a single stimulus. Conversely, associative learning (which generates associative memories) involves learning the relationship between two stimuli.
Nonassociative learning
Nonassociative learning involves the presentation of a single stimulus either once or multiple times. There are two main types of nonassociative learning: habituation and sensitization (Figure 18.11). Habituation refers to a diminished response to a stimulus that has been presented multiple times. An example would be someone who moves from a small town to a busy city. At first, the traffic noise may keep the person awake at night, but after a few days the person no longer notices the noise and can fall asleep easily. Sensitization refers to an exaggerated response to a stimulus after it has been presented multiple times. For sensitization to occur, the stimulus must be intense and/or unpleasant. An example of sensitization would be the repeated loud ringing of a phone becoming more and more annoying over time.
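To make the contrast concrete, the short sketch below is a deliberately simplified illustration, not a biological model: it simply tracks how the magnitude of a response might change across repeated presentations of the same stimulus under habituation versus sensitization. The decay and gain values are arbitrary assumptions chosen for readability.

```python
# Toy illustration (not a biological model): how response magnitude might
# change across repeated presentations of the same stimulus.

def habituation(baseline=1.0, decay=0.7, trials=5):
    """Response shrinks with each repetition of a mild stimulus."""
    response, history = baseline, []
    for _ in range(trials):
        history.append(round(response, 3))
        response *= decay            # each exposure weakens the response
    return history

def sensitization(baseline=1.0, gain=1.4, trials=5):
    """Response grows with each repetition of an intense or unpleasant stimulus."""
    response, history = baseline, []
    for _ in range(trials):
        history.append(round(response, 3))
        response *= gain             # each exposure exaggerates the response
    return history

print("habituation:  ", habituation())    # [1.0, 0.7, 0.49, 0.343, 0.24]
print("sensitization:", sensitization())  # [1.0, 1.4, 1.96, 2.744, 3.842]
```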
Neuroscience across species: Mechanisms of sensitization in a sea slug
Nonassociative learning is a relatively simple form of learning, and much of what we know about the central circuits supporting it first came from studies of an organism called Aplysia (Figure 18.12). Aplysia, also known as sea slugs, have only ~10,000 neurons in their nervous system. This relatively simple nervous system enabled researchers like Nobel Prize winner Dr. Eric Kandel to study nonassociative learning and determine the specific circuit mechanisms that mediate it.
To study nonassociative learning, Kandel took advantage of a defensive reflex in Aplysia called the gill and siphon withdrawal reflex. The gill of Aplysia is a delicate tissue through which the animal exchanges oxygen and is located on the ventral side of the animal. The siphon is a small tube, extending from the ventral side of the animal out the caudal end, that is used to flow water through the animal as it moves. When the siphon is touched, the gill retracts, a reflex meant to protect the delicate gill from whatever unknown threat may be approaching from behind. This reflex can undergo both forms of plasticity: habituation and sensitization. In the case of habituation, repeatedly touching the siphon eventually results in a weaker response. To observe sensitization, researchers paired a touch of the siphon with a shock to the tail. After repeated pairings (or even after just one pairing with a particularly large shock), a touch to the siphon elicits an exaggerated gill withdrawal response. Kandel and colleagues studied these changes in reflexes and showed that they relied on central changes in synaptic strength and not, for example, on changes in muscle fatigue or sensory receptor sensitivity.
Figure 18.13 shows some of the mechanisms underlying sensitization and provides an example of how Aplysia helped us define a simple learning circuit. Underlying the exaggerated response is a larger-amplitude EPSP in the gill withdrawal motor neuron compared to the EPSP seen before sensitization. This larger EPSP results from increased neurotransmitter release from the presynaptic neuron, which in the case of the gill withdrawal reflex is the siphon sensory neuron.
Let’s walk through how this change occurs. The siphon has sensory neurons that connect directly to the gill motor neurons. At baseline, a touch of the siphon releases neurotransmitter onto the gill motor neuron, exciting it (an EPSP) and causing it to fire. The gill muscle contracts, and the gill withdraws. The synapse between the siphon sensory neuron and the gill motor neuron is not isolated, however. It receives other inputs, the most important of which, for our purposes, comes from a serotonergic interneuron. That serotonergic interneuron releases serotonin onto receptors on the presynaptic siphon sensory terminal, and it gets its input indirectly from tail sensory neurons. When the tail is shocked, the serotonergic neuron releases serotonin onto the siphon sensory presynaptic terminal. The G protein signaling cascades that follow serotonin receptor activation change the siphon sensory presynaptic terminal so that it now releases more neurotransmitter in response to a single siphon sensory action potential. The next touch of the siphon therefore releases more neurotransmitter, causing a larger EPSP in the motor neuron, more motor neuron action potentials, and greater gill muscle contraction. The end result is that the same siphon touch produces a larger gill withdrawal after sensitization by tail shock than it did before. You can read more about Eric Kandel’s discoveries at The Lasker Foundation.
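For readers who like to see circuit logic written out step by step, here is a minimal sketch of the sequence just described. It is purely illustrative: the class, method names, and numbers are assumptions chosen for clarity, and the real biochemistry involves far more than a single multiplicative release factor.

```python
# Minimal sketch of the sensitization logic described above (illustrative only;
# the numeric values are arbitrary assumptions, not measurements).

class SiphonSensoryTerminal:
    def __init__(self):
        self.release_factor = 1.0    # relative transmitter release per action potential

    def serotonin_input(self, facilitation=1.5):
        # Tail shock -> serotonergic interneuron -> G protein cascades that
        # increase transmitter release from this presynaptic terminal.
        self.release_factor *= facilitation

    def touch_siphon(self):
        return self.release_factor   # transmitter released onto the gill motor neuron

def gill_withdrawal(transmitter_released):
    epsp = transmitter_released      # more transmitter -> larger EPSP in the motor neuron
    return epsp                      # larger EPSP -> more spikes -> stronger gill contraction

terminal = SiphonSensoryTerminal()
print("withdrawal before tail shock:", gill_withdrawal(terminal.touch_siphon()))  # baseline
terminal.serotonin_input()           # sensitizing tail shock
print("withdrawal after tail shock: ", gill_withdrawal(terminal.touch_siphon()))  # exaggerated
```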
Associative learning
Unlike habituation and sensitization, in which the response to a single stimulus changes as a result of experience, associative learning involves learning a relationship between two stimuli by presenting them close together in time. There are two main types of associative learning: classical conditioning and operant conditioning.
Classical conditioning
Dr. Ivan Pavlov is a central figure in the history of associative learning. Pavlov was a Russian physiologist who won the Nobel Prize in Physiology or Medicine in 1904 for his discoveries about the physiology of digestion. However, he is best known for discovering classical conditioning, a field that bears his name, as it is sometimes called “Pavlovian conditioning”. The discovery was purely accidental, made while he was studying the gastric system of dogs. He was interested in the amount of saliva dogs produced when presented with food versus non-food items and discovered, not surprisingly, that dogs salivated when food was placed in front of them. However, he also made a curious observation: the dogs began salivating before the food was presented, in response, for example, to the footsteps of the research assistants coming down the hall to bring the food. These auditory stimuli elicited salivation after being reliably paired with food presentation. Pavlov then tested other signals, including the ringing of a bell, to indicate that food was on its way. No matter what the signal, the dogs salivated in anticipation of the food, suggesting that they had learned the association between the signal and the food.
Figure 18.14 describes the process Pavlov was uncovering in his work, which we now call classical conditioning. The food is referred to as the unconditioned stimulus, and salivation as the unconditioned response. These are physiological responses that are innate rather than learned. Conditioned stimuli, like Pavlov’s bell, are stimuli that signal, and thus come to be associated with, the unconditioned stimulus. Conditioned responses are the responses triggered by the conditioned stimulus after learning. In the case of the dogs, the conditioned response was salivation, but only when it occurred after the conditioned stimulus. Thus, the conditioned response and unconditioned response are often the same physiological response (salivation in this case) and are distinguished only by the timing of their occurrence, with conditioned responses happening after the conditioned stimulus but before the unconditioned stimulus is presented.
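If it helps to keep the four terms straight, the sketch below treats conditioning as an association strength that grows with each pairing of the conditioned stimulus (bell) and unconditioned stimulus (food), and that must cross a threshold before the bell alone elicits the conditioned response. The learning rate, threshold, and update rule are illustrative assumptions, not a claim about how the brain actually computes.

```python
# Toy sketch of classical conditioning vocabulary (parameters are arbitrary).

association = 0.0       # strength of the bell-food (CS-US) association
LEARNING_RATE = 0.3     # how much each pairing strengthens the association
CR_THRESHOLD = 0.5      # strength needed for the CS alone to elicit the CR

def pairing_trial():
    """One conditioning trial: bell (CS) immediately followed by food (US)."""
    global association
    association += LEARNING_RATE * (1.0 - association)  # grow toward a maximum of 1.0

def present_bell_alone():
    """Test trial: does the bell alone now elicit salivation (the CR)?"""
    return "salivates (CR)" if association >= CR_THRESHOLD else "no response"

print("before training:", present_bell_alone())  # no response
for _ in range(5):
    pairing_trial()                               # bell reliably predicts food
print("after training: ", present_bell_alone())  # salivates (CR)
```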
While Pavlov’s work was transformative for the study of learning, the brain mechanisms remained a mystery at the time. In particular, the effort to localize memory to a specific part of the brain, much as speech production can be localized to Broca’s area, has occupied researchers for decades and continues to be a focus of research today. This memory trace in the brain is sometimes referred to as an engram. Dr. Karl Lashley, for example, launched numerous studies in the early 1900s in which he made extensive cortical lesions in rats, looking for the area that held the engram. After repeatedly observing no behavioral deficit, Lashley concluded in frustration that the memory engram could not be found. In Lashley’s words: “I sometimes feel, in reviewing the evidence on the localization of the memory trace, that the necessary conclusion is that learning is just not possible”.
About eighty years after Pavlov won the Nobel Prize, however, Pavlov’s scientific “great-grandson”, Dr. Richard Thompson, discovered a specific brain region underlying a specific form of Pavlovian conditioning. Instead of salivating dogs, Thompson used blinking rabbits. He developed a paradigm, rabbit eyeblink conditioning, in which an airpuff to the eye (the unconditioned stimulus) triggers a reflexive eyeblink, the unconditioned response. If the airpuff is reliably preceded by a tone, the conditioned stimulus, the rabbit eventually starts blinking to the tone, thus emitting a conditioned response. Instead of looking in the cortex as Lashley did, Thompson set out to find the memory engram in subcortical structures. Thompson and his doctoral student, David McCormick, published a paper demonstrating that they had indeed localized the memory engram: lesions of the ipsilateral dentate-interpositus nuclei of the cerebellum completely eliminated the learned eyeblink response (McCormick & Thompson, 1984). Importantly, the lesioned rabbits were still able to blink in response to the puff of air, indicating that the motor capacity persisted but the learned association did not. This particular type of associative learning is therefore supported by the cerebellum. Other types of associative learning, however, such as conditioned fear, rely on a different network of brain regions.
Neuroscience in the lab: Fear learning as a special case of classical conditioning
Many forms of fear learning are forms of classical conditioning. Fear memory is an umbrella term for a number of paradigms in which an association is formed between a threatening stimulus and a neutral stimulus. Research with human subjects can rely on self-report to examine fear, but how do we study fear in animal subjects? Although it is not possible to measure fear directly in experimental animals, we can rely on species-typical defense behaviors to give us a clue about what brain circuitry gives rise to the perception of threats and the avoidance of danger. The visual systems of predators that hunt rodents are exquisitely sensitive to movement; thus, one species-typical behavior of mice is to freeze to avoid predation. Researchers capitalize on the tendency of rodents to freeze when they feel threatened as a way to indirectly measure the animals’ emotional state.
Figure 18.15 shows two typical fear conditioning paradigms, both of which involve delivering a foot shock as the unconditioned stimulus in association with a different neutral stimulus. In both cases, freezing behavior is used to measure the fear elicited by the formerly neutral stimulus. In cued fear conditioning, the shock is paired with a tone, which then becomes the conditioned stimulus. Similar to the eyeblink conditioning described above, the previously neutral tone comes to elicit the conditioned response, freezing. In contextual fear conditioning, the shock is delivered without a tone. As a result, the environment where the shock was delivered serves as the conditioned stimulus. The next time a rodent is placed in the shock context, it will freeze, reflecting that it learned and remembered the association between the context and the unpleasant foot shock. There is general consensus that all types of fear conditioning depend on the amygdala, while contextual fear conditioning also relies on the hippocampus (Izquierdo et al., 2016).
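The essential difference between the two paradigms is simply what serves as the conditioned stimulus; the small sketch below records that difference explicitly (the field names are ours, introduced only for illustration).

```python
# The two fear conditioning paradigms in Figure 18.15, written as simple records
# (field names are illustrative, not standard terminology).

cued_fear = {
    "unconditioned_stimulus": "foot shock",
    "conditioned_stimulus": "tone paired with the shock",
    "test": "present the tone alone",
    "measured_response": "freezing",
}

contextual_fear = {
    "unconditioned_stimulus": "foot shock",
    "conditioned_stimulus": "the environment (context) where the shock occurred",
    "test": "return the animal to the shock context",
    "measured_response": "freezing",
}

for name, paradigm in (("cued", cued_fear), ("contextual", contextual_fear)):
    print(f"{name}: CS = {paradigm['conditioned_stimulus']}; measure = {paradigm['measured_response']}")
```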
People Behind the Science: Steve Ramirez (inducing false memories)
Dr. Steve Ramirez, an assistant professor of Neuroscience at Boston University, is best known for his discovery, alongside his colleague Dr. Xu Liu, that memories can be engineered (i.e., artificially created). To do this, they first had to capture a memory in the brain by tagging neurons in the dentate gyrus subregion of the hippocampus that were active during memory formation. Using a combination of genetic and optogenetic tools, they tagged neurons that were active during fear conditioning with Channelrhodopsin-2 (see Methods: Optogenetics), which allowed them to reactivate this same set of neurons in a different environment. Contextual fear conditioning is quite specific: if a mouse undergoes contextual fear conditioning in one environment, it will freeze only in the shock context and not in a “safe” context where it has never been shocked. Liu and Ramirez found that when they reactivated the tagged group of neurons in the “safe” context, the mice froze as if they were in the fear context (Liu et al., 2012). Thus, each memory activates a specific set of hippocampal neurons and can be recalled when that set of neurons is reactivated. They concluded that the set of neurons activated by a memory is the neural substrate for a memory engram. Dr. Ramirez aims to apply these findings to therapeutic interventions for disorders such as depression and post-traumatic stress disorder.
Operant conditioning
Unlike classical conditioning, which involves learning an association between the conditioned and unconditioned stimuli, operant conditioning involves learning an association between a voluntary behavior and its consequence. The most well-known experimental example of operant conditioning is training a rat to press a lever to get a food pellet. However, you might be more familiar with the same type of operant behavior from training a pet dog to do tricks using positive reinforcement. Every time your dog does something desirable, you give him a treat (a reinforcement), making it more likely that the behavior will occur again. Punishments, conversely, are consequences that decrease the likelihood that the behavior will occur again. Punishments are often confused with negative reinforcement, which is the removal of an undesirable stimulus and makes a behavior more likely to occur. A common example of negative reinforcement is the use of alcohol to alleviate nervousness in a social situation: the alcohol mitigates the nervous feelings, making it more likely that the person will drink alcohol the next time they encounter a social situation. There can also be positive punishment, in which something is added to the environment to decrease the behavior, for example, giving students extra homework when they fail to complete the assigned work. Figure 18.16 diagrams the different forms of operant conditioning and their effects on behavior.
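Because the reinforcement and punishment vocabulary is easy to mix up, the short sketch below encodes the same two-by-two logic as Figure 18.16: whether something is added to or removed from the environment, and whether the behavior becomes more or less likely as a result. The function and argument names are ours, introduced only for illustration.

```python
# The two-by-two logic of operant conditioning (Figure 18.16).

def operant_category(stimulus_change, effect_on_behavior):
    """
    stimulus_change:    "added" (something is added to the environment)
                        or "removed" (something is taken away)
    effect_on_behavior: "increases" or "decreases" the likelihood of the behavior
    """
    if effect_on_behavior == "increases":
        return "positive reinforcement" if stimulus_change == "added" else "negative reinforcement"
    return "positive punishment" if stimulus_change == "added" else "negative punishment"

# Examples from the text:
print(operant_category("added", "increases"))    # dog treat for a trick -> positive reinforcement
print(operant_category("removed", "increases"))  # alcohol removes nervousness -> negative reinforcement
print(operant_category("added", "decreases"))    # extra homework -> positive punishment
```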
A number of brain regions contribute to operant conditioning. One important region is the basal ganglia, especially the caudate and putamen, collectively known as the striatum (see Chapter 10 Motor Control). The striatum receives input from many cortical regions, most importantly the motor cortex, and sends projections via the globus pallidus and substantia nigra to the thalamus, and ultimately back to those same cortical regions. This cortical-striatal system is thought to be critical for forming associations between stimulus and response. In addition, the medial prefrontal cortex, the amygdala, and the mesolimbic dopamine system (i.e., the “reward pathway”) all contribute to producing goal-directed and stimulus-response behaviors (Rudy, 2008).