Learning Objectives
By the end of this section, you should be able to
- 2.2.1 Describe the rhythmic behavior produced by the simple swim circuit in Tritonia diomedea.
- 2.2.2 Define some of the key features of neural circuits: parallel processing, feedback, efficiency, and a careful balance between excitation and inhibition.
- 2.2.3 Describe the work of computational neuroscientists to build mathematical models of neurons and neural circuits.
As specialists in communication, neurons do not work on their own, but in interconnected groups, often called neural circuits or neural networks. Even small numbers of neurons can generate remarkably complex behavior. To see this, let’s examine a very simple neural circuit: the “swim” network in the sea slug Tritonia diomedea.
In the swim circuit of Tritonia diomedea a few neurons produce rhythmic behavior important for survival
Tritonia is a species of slimy mollusk that glides along the ocean floor off the west coast of the United States and Canada. The mortal enemy of a Tritonia is the Pacific sea star, a voracious predator that loves to dine on Tritonia. To avoid this fate, a Tritonia “swims” away if it feels the touch of a sea star. Well, actually the Tritonia kind of thrashes about, arching its back, then relaxing, then arching again, in a rhythmic motion that helps it move up into the ocean current. When the Tritonia stops swimming, it sinks back down to the ocean floor, hopefully far away from the hungry sea star. Check out a video of a Tritonia escaping a sea star here:
Researchers have found that Tritonia swim away from danger with the help of a relatively simple neural circuit (Willows and Hoyle, 1969; Getting, 1983), consisting of sensory neurons, motor neurons, a partner neuron, and an inhibitory neuron.
The sensory neurons in the skin are tuned to specific chemicals on the tentacles of the Pacific sea star. When these are detected, the sensory neurons become very excited, firing a long-lasting barrage of action potentials (Step 2 in Figure 2.9; each hash line represents an action potential). The sensory neurons release excitatory transmitter onto a set of three motor neurons that control the muscles of the back. When these motor neurons reach threshold, they fire action potentials, contracting the back muscles to produce the arching movement that forms the first half of the ‘swim’ rhythm (a behavior that is cyclical, or periodic).
This is not all the motor neurons do. They also release excitatory transmitter onto a partner neuron, which in turn excites an inhibitory neuron (Step 3 in Figure 2.9). This inhibitory neuron provides feedback to the circuit, inhibiting the motor neurons. When this happens, the inhibition overwhelms the excitatory input from the sensory neurons, and the motor neurons are pushed below threshold. With the motor neurons inactive, the back muscles relax and the Tritonia straightens out. At this point, it has arched back and then relaxed forward, completing a ‘swim’ rhythm.
Can you predict what will happen next? Stop and think about it for a moment. If you predicted that the whole cycle would begin again, you’re spot on: with the motor neurons inactive, the partner neuron and, in turn, the inhibitory neuron are no longer being excited, so they stop firing action potentials and the motor neurons are released from inhibition. That means the long-lasting activity in the sensory neurons is again able to activate the motor neurons, producing another arching of the back (Step 4), followed by another round of inhibition that again relaxes the animal forward (Step 5 in Figure 2.9). The whole cycle repeats over and over again until the sensory neurons stop firing, which usually takes 10 to 30 seconds (far longer than the brief touch of the predator). Thus, just a few neurons in the Tritonia nervous system transform an outside event (a sea star touch) into a complex and long-lasting rhythm that ‘swims’ it away from danger.
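To make the cycle concrete, here is a minimal sketch of the rhythm in Python. This is not the published model of the circuit; it only illustrates the logic described above: sustained sensory drive excites a motor unit, and the motor unit’s own activity, relayed with a delay through the partner and inhibitory units, shuts it back off. Every number in it (the drive, the weight, the threshold, the delay) is invented purely for the example.

```python
# Toy discrete-time sketch of the Tritonia swim rhythm.
# NOT the published circuit model: units, weights, thresholds, and the delay
# are invented to illustrate how sustained sensory drive plus delayed
# inhibitory feedback can produce alternating "arch"/"relax" activity.

SENSORY_DRIVE = 1.0   # long-lasting excitation from the sensory neurons
W_INHIB = 2.0         # strength of inhibition onto the motor neurons
THRESHOLD = 0.5       # firing threshold for the motor unit
DELAY = 3             # time steps for the motor -> partner -> inhibitory loop

def simulate(steps=30):
    motor_history = [0] * DELAY  # recent motor activity, used for the delayed loop
    for t in range(steps):
        # Inhibitory feedback reflects motor activity DELAY steps ago,
        # relayed through the partner neuron and the inhibitory neuron.
        inhibition = W_INHIB * motor_history[-DELAY]
        net_input = SENSORY_DRIVE - inhibition
        motor_fires = 1 if net_input > THRESHOLD else 0
        motor_history.append(motor_fires)
        print(f"t={t:2d}  motor={'ARCH ' if motor_fires else 'relax'}")

simulate()
```

Running this toy produces alternating bursts of “arch” and “relax” steps. Dropping W_INHIB to, say, 0.4 removes the pauses entirely and the motor unit fires on every step, a crude analog of the too-little-inhibition failure mode discussed below.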
The Tritonia swim network is not only rhythmic; it is also dynamic, changing based on the Tritonia’s experience. For example, if a Tritonia is injured, the sensory neurons become hyper-excitable, firing at a lower threshold and for longer times. This makes the Tritonia more likely to swim, and to swim more vigorously, hopefully protecting it from further injury (but also using precious energy). The swim network can also shift in the other direction. If a Tritonia is gently touched over and over again, the sensory neurons produce smaller and smaller EPSPs, and the network begins to ignore this gentle touch, learning that it does not pose a real danger (Brown, 1998; Hoppe, 1998). The dynamic nature of the swim network helps the Tritonia strategically allocate its energy based on its experiences—to swim away from real danger while also saving energy by ignoring events that are innocuous (meaning harmless or non-threatening).
Even though the Tritonia swim circuit is simple, it can still malfunction. If the connection to the inhibitory neuron is weakened, there will not be enough inhibition to pause activity in the circuit; instead, the circuit will produce constant activity that could lock up the back muscles, freezing the Tritonia in an arched position that makes it an easy dinner. Too much inhibition is also problematic. If the inhibitory neuron releases too much neurotransmitter, the circuit will pause for too long, letting the animal drop back down to the ocean floor before it has managed to get away from the sea star. Generating the swim rhythm that keeps a Tritonia safe requires just the right balance between excitation and inhibition to keep the circuit moderately but not excessively activated (Katz and Frost, 1997; Calin-Jageman et al., 2007).
Studying neural circuits reveals important principles about how nervous systems generate behavior
You probably never thought about the swimming behaviors of slimy mollusks before, but hopefully you found it exciting (pun intended!) to learn how just a few neurons can produce a life-saving behavior. One of the key goals of neuroscience is to uncover the secrets of more complex neural networks, such as the ones operating in your nervous system to produce language, emotions, and thought. There is still a lot we do not know, but even from the simple swim circuit of Tritonia we can glean a few key insights (Figure 2.10):
- Neural networks feature parallel processing, meaning that information spreads along multiple pathways. In the Tritonia swim network, the motor neurons form synapses onto the muscles and onto a partner neuron that activates inhibition, sending messages along two distinct pathways at the same time.
- Feedback, where a neuron influences the inputs it will later receive, makes even seemingly simple networks capable of producing complex patterns of activity.
- Neural networks are often rhythmic or cyclical, exhibiting repeating patterns of activity and inactivity. This is reflected in the fact that many of our behaviors are also rhythmic: walking, sleep/wake cycles, breathing, and more. Chapter 15 Biological Rhythms and Sleep digs into detail on these fascinating cycles of neural activity.
- Neural circuits can malfunction. They operate best at moderate levels of activity. They can easily be overwhelmed with excitation (which shows up in behavior as seizures or muscle spasticity) or inhibition (which shows up in behavior as torpor or muscle flaccidity). To work well, networks need both inhibition and excitation, and in the right balance.
- Neural networks are highly efficient. Tritonia can swim away from danger with a network of fewer than a dozen neurons. Even with the large numbers of neurons in the mammalian nervous system, the efficiency of operation is incredible. Your entire brain uses about 20 watts of electrical power (Balasubramanian, 2021). For comparison, a modern Xbox or PlayStation draws up to 160 watts of power. It is true that your brain uses a large fraction of your daily energy budget (about 500 kilocalories per day, or 25% of a typical 2,000 kilocalorie daily energy budget; Herculano-Houzel, 2012), but nervous systems are still remarkably efficient relative to the electronics around us (the quick conversion below shows how the wattage and calorie figures line up).
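As a quick check that the wattage and calorie figures above tell the same story, converting 500 kilocalories per day into an average power gives a number in the same ballpark as the 20-watt estimate:

```python
# Convert the brain's ~500 kcal/day energy budget into an average power in watts.
kcal_per_day = 500
joules_per_day = kcal_per_day * 4184   # 1 kilocalorie = 4,184 joules
seconds_per_day = 24 * 60 * 60         # 86,400 seconds in a day
watts = joules_per_day / seconds_per_day
print(f"{watts:.1f} watts")            # about 24 watts, close to the ~20-watt figure
```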
What makes neural networks even more incredible is that they are self-assembled, following genetic and environmental signals to create and maintain the functioning of the network. How, exactly, this happens is still deeply mysterious. Chapter 5 Neurodevelopment explains some of what we’ve learned about how neural circuits assemble.
Neuroscience in the Lab
Computational neuroscience
Scientists frequently express and check their understanding of natural phenomena by creating models and comparing those models to reality. This is the guiding principle for computational neuroscience, a diverse subfield of neuroscience dedicated to developing and exploring mathematical models of neurons and neural networks. Computational neuroscientists simulate neurons, meaning that they specify a set of mathematical rules to stand in for a real neuron, and then use computers to repeatedly apply those rules, producing data on how their models would perform under different conditions.
Some simulations use very simple models of neurons. For example, in an integrate-and-fire model, each “neuron” is represented in the computer as a set of inputs and a threshold, and there is just one simple rule: if the sum of a neuron’s inputs is greater than its threshold, it fires, sending a temporary input to its partners; otherwise, it stays silent. Even such simple simulations of neurons are capable of producing complex behaviors that can mimic the operation of real nervous systems, to some extent. Computational neuroscientists have also developed highly detailed models of neurons, in which a complete 3D reconstruction of a real neuron is simulated, sometimes even down to the level of individual molecules. Different levels of abstraction allow computational neuroscientists to test ideas about what aspects of neurons are especially important for different functions of the nervous system.
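As a concrete illustration of the rule just described, the short Python sketch below wires three threshold units into a loop. The wiring, weights, and thresholds are all made up for the example, but each unit follows exactly the rule above: fire when the summed input exceeds the threshold, otherwise stay silent. Even this tiny, arbitrary network settles into a repeating on/off rhythm, echoing the kind of cyclical activity seen in the Tritonia circuit.

```python
import numpy as np

# Three threshold units following the integrate-and-fire style rule above:
# a unit fires when the sum of its inputs exceeds its threshold, and a spike
# delivers a brief input to its partners on the next time step.
# All weights and numbers here are arbitrary, chosen only for illustration.
weights = np.array([
    [0.0, 0.0, -1.5],   # unit 0 is inhibited by unit 2
    [1.2, 0.0,  0.0],   # unit 1 is excited by unit 0
    [0.0, 1.2,  0.0],   # unit 2 is excited by unit 1
])
threshold = 1.0
external = np.array([1.1, 0.0, 0.0])   # steady outside drive to unit 0 only

spikes = np.zeros(3)                   # which units fired on the previous step
for t in range(12):
    summed_input = external + weights @ spikes   # sum each unit's inputs
    spikes = (summed_input > threshold).astype(float)
    print(t, spikes.astype(int))       # prints a repeating pattern of 0s and 1s
```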
Many computational neuroscientists explore neural simulations simply to better understand the brain and to generate predictions that can then be tested experimentally. In addition, simulated neural networks have many practical applications. In fact, every time you ask Siri or Google to play some music, a simulated neural network converts your spoken command into a text-based representation of the song you want, which your smartphone can then find and play. Artificial neural networks have also become key technologies for processing images (that’s how you can search for ‘cute dogs’ in your photostream) and are at the heart of AI technologies like ChatGPT.
As computational neuroscientists have become more skilled at simulating neurons, they have begun to explore using their simulations to replace or repair parts of living nervous systems. For example, Rosa Chan at Hong Kong University is one member of a large team of collaborators who have been working to develop a neural prosthetic, a device that could replace or repair a part of the nervous system. In one set of studies, the researchers implanted a rat with electrodes to record some of the inputs and outputs to its hippocampus as it completed a memory task (Berger et al., 2011; Deadwyler et al., 2013). The researchers analyzed how the rat’s real hippocampus works, and used these recordings to fine-tune a simulated hippocampus, tweaking it so that, when given the real inputs from the rat, their simulation generated similar outputs. This “simulated hippocampus” is diagrammed in the top of Figure 2.11.
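At its core, this fine-tuning step is a system-identification problem: adjust the parameters of a model until, given the recorded inputs, it reproduces the recorded outputs. The actual prosthetic work used a sophisticated multi-input, multi-output nonlinear model of spiking activity; the sketch below substitutes a deliberately simple least-squares fit on made-up data, only to show the overall shape of the procedure.

```python
import numpy as np

# Toy version of "tuning a simulated circuit to match recordings":
# find a mapping so that recorded inputs reproduce recorded outputs.
# The data are random stand-ins, and the model is a plain linear
# least-squares fit rather than the nonlinear model used in the real work.
rng = np.random.default_rng(0)

inputs = rng.poisson(5.0, size=(200, 8)).astype(float)   # 200 time bins, 8 input cells
true_map = rng.normal(0.0, 0.3, size=(8, 4))             # hidden "real circuit"
outputs = inputs @ true_map + rng.normal(0.0, 0.5, size=(200, 4))  # 4 output cells

# "Fine-tune" the simulation: choose the mapping that best reproduces the outputs.
fitted_map, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# Once fitted, the model can supply outputs for new inputs -- the role the
# simulated hippocampus played when the real hippocampus was shut down.
new_inputs = rng.poisson(5.0, size=(10, 8)).astype(float)
predicted = new_inputs @ fitted_map
print(predicted.shape)   # (10, 4)
```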
To test the simulation, the researchers measured the rat’s ability to complete a memory task that involves the hippocampus (bottom of Figure 2.11). Under normal circumstances, the rat performed well (green bar in Figure 2.11). Next, the researchers temporarily shut down processing in the real hippocampus by cooling it, an intervention that disrupts activity in the hippocampus but not the inputs coming into it. When this happened, the rat began to fail the memory task (blue bar in Figure 2.11), as it no longer had the hippocampus to help it process new memories. Finally, the researchers “replaced” the rat’s hippocampus with the simulation, feeding into it the inputs the real hippocampus should have been receiving and sending the simulation’s outputs back into the rat’s brain. Amazingly, the rat began to succeed at the memory task again (magenta bar in Figure 2.11), though not quite with the same accuracy as when its real hippocampus was available. Preliminary testing of this type of system is now underway with humans (Hampson et al., 2018). This incredible achievement takes us back to the foundational idea in computational modeling: the researchers’ ability to simulate a hippocampus shows that they have understood something essential about what the hippocampus actually does. It’s also an inspiring invitation into computational neuroscience: who knows what you could achieve by using computers to simulate the incredible power of neurons?