Introduction to Behavioral Neuroscience

7.1 Acoustic Cues and Signals


Learning Objectives

By the end of this section, you should be able to:

  • 7.1.1 Identify auditory cues and signals and explain their behavioral relevance.
  • 7.1.2 Describe what an acoustic wave is and how it propagates through air and interacts with solid objects.

The world is full of sound, and almost every species of animal possesses the ability to sense and respond to sounds. Indeed, hearing is so important that several components of the auditory system have evolved not once but many times. In this section, we will explore what causes sound, first from the standpoint of function and then from the standpoint of physics. This will prepare you for the next section, which covers how sound is detected by the ear and transmitted to the brain.

Why is the sense of hearing important?

Like every other sensory system, the auditory system has evolved because it provides useful information about the objects and events in an animal’s environment. Animals that can accurately sense what is happening around them can respond in a way that increases their chances of survival and of successfully passing on their genes. Hearing is especially valuable to survival because sound can travel over long distances and around obstacles, allowing animals to make timely responses, for example by fleeing from a predator before it gets too close.

To further illustrate why hearing is such an important sense, consider the female bird pictured perching in the center of Figure 7.2. Sounds are coming from every direction. Some are not especially relevant, like the splash of a turtle falling into the water or the rippling of water flowing in the stream. Other sounds carry important information, like the rustle in the nearby bushes that could be coming from a predator, or the song of a nearby male advertising himself as a potential mate. Sounds that are produced unintentionally are called cues, whereas sounds produced intentionally, with the goal of communicating something, are called signals.

Some of the most common and important cues are the sounds animals make as they move. Predators and prey alike have evolved to move as quietly as possible to minimize the cues they give about their presence and location. It is almost impossible to avoid making any sound, however: sticks break, grasses rustle, and even simply moving through the air creates turbulence. As we will see, all of these events can generate acoustic waves that travel for tens of meters or even farther. Just as there is selective pressure for animals to move quietly, many species have evolved extremely sophisticated and sensitive auditory systems. For example, the common house cat (Felis catus) has hearing sensitive enough to detect a mosquito flying in a quiet room.

A bird in a forest setting with sound waves from the surrounding environment represented. Upper left has an inset with sound waves drawn as a line graph.
Figure 7.2 Acoustic signals. Perceiving the acoustic scene: multiple acoustic cues and signals converge, resulting in perception. Brush rustles: potential predator? Turtle leaves water: interference, not a threat. Potential mate singing: stimulus to prioritize. Water flowing: noise, background stimulus, not important. Image credit: Created by Natalie M. Lucas. CC BY-NC-SA 4.0.

In contrast with cues, acoustic signals have evolved to allow one individual, called a sender, to influence the behavior of another individual, the receiver. Birdsong, for example, is a signal that communicates to female songbirds that a potential mate is nearby. The male sings with the intent of influencing females to approach and mate. In many species, the song also informs the female of how desirable the male is, because only healthy males can produce songs with a certain level of complexity (Nowicki and Searcy, 2004). Acoustic mating displays are common throughout the animal kingdom, including many insects, fishes, amphibians, birds, and mammals.

Acoustic signals can also be used to broadcast the identity of the individual producing the signal, as long as receivers can distinguish the unique features of each individual’s signal. Individual recognition is particularly important to animals that live in large social groups where visual cues are not easily distinguished, as in a flock of nesting seagulls or a herd of elephants (McComb et al., 2000) or where individuals need to find each other over long distances, as in dolphins and other marine mammals (Sayigh et al., 1999).

Animals use acoustic signals to communicate to each other about more than just themselves. When social animals are foraging in large groups, a few individuals may keep watch and signal to the group about sources of danger. Acoustic signals have a particular advantage here because they can travel long distances and around obstacles that would block sight. For example, vervet monkeys have different kinds of alarm calls that indicate the presence of specific kinds of predators (Seyfarth and Cheney, 1990). When one monkey sees a leopard, it makes the “leopard” call, and the other members of its troop will climb into trees, even without themselves seeing the leopard. In contrast, when a monkey makes the “eagle” call, the troop will hide in nearby bushes that are too thick for an aerial predator to enter. The monkeys’ auditory system must be able to discriminate between these signals for them to respond appropriately.

Humans have evolved a much more elaborate system of acoustic communication called speech, in which words are formed from combinations of distinct vocal elements called phonemes. For example, the vowels in the English words “bat” and “pat” are the same phoneme (/a/), whereas the initial consonants (/b/ and /p/) are different. One of the most important tasks of the human auditory system is to identify and discriminate between similar-sounding phonemes, and as we will see in 7.3 How Does the Brain Process Acoustic Information?, this ability is shaped by the sounds an infant hears in the first year of life.

Some species produce acoustic signals for themselves to hear, with the purpose of detecting obstacles and prey in their environment. Because sound waves reflect from solids and liquids, they can be used for echolocation. The ability to echolocate has evolved separately in bats and in some aquatic mammals. Echolocating animals produce short vocalizations that reflect off nearby objects. Specialized neural circuits use the delay between the signal and its echo to determine distances to those objects, while other circuits compute the direction of the object (see 7.3 How Does the Brain Process Acoustic Information?) and even some of its physical characteristics (Moss and Sinha, 2003). A bat hunting by echolocation can easily discriminate between a solid obstacle and a delicious insect, and it can rapidly alter its course of flight to intercept the insect while avoiding the obstacles (Surlykke et al., 2009).
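To make the delay computation concrete, here is a minimal Python sketch (our own illustration, not part of the text; the function name is hypothetical), assuming sound travels at about 343 m/s in air. The key point is that the sound covers the distance twice, once out and once back:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def target_distance(echo_delay_s: float) -> float:
    """Distance to a reflecting object from the call-to-echo delay.

    The sound travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_SOUND * echo_delay_s / 2

# A 10 ms delay corresponds to a target about 1.7 m away.
```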

In summary, the sense of hearing serves many important functions for humans and other animals. Acoustic cues and signals are a critical source of information about predators, prey, obstacles, and dangers in the nearby environment, as well as the presence and intentions of conspecifics (other animals of the same species). In the next section, we will see how the physical nature of sound allows it to transmit this information in a way that complements the other senses.

How is sound produced?

What is sound? From the perspective of physics, sound consists of pressure waves moving through air or some other physical material. Waves are found throughout nature and share a common mathematical description. At the same time, acoustic waves have special properties that make them distinct from other kinds of waves.

To illustrate what a wave is, imagine throwing a rock into a still pool of water. When it hits the surface, the rock will displace some of the water, causing it to rise in a circle around the point of impact. Gravity pulls the displaced water back down, which in turn pushes up the water a little further out. This process repeats again and again to produce what we observe: a ripple traveling outward from the point of impact. An acoustic wave moves in much the same way, except that instead of a two-dimensional surface moving up and down, the molecules of the air move closer together and further apart, and the wave spreads in three dimensions.

Now instead of throwing a rock into a pool, imagine clapping your hands together. As your hands meet, they push air out of the space between them. Where does this air go? There is air all around your hands, but air is mostly empty space, so there is plenty of room for the displaced molecules of air to crowd in with the molecules that are already there. Now there is a region around your hands where the air is at a higher pressure, with an increased density of air molecules. The compressed molecules in the region of higher pressure push out into the surrounding space, which is less dense, or rarefied. This creates a new region of higher pressure, which in turn spreads further out from your hands. Just as the water ripple radiates out in a circle from a central point, the pressure wave created by your hands clapping radiates out in a sphere. If a microphone or some other pressure sensor is placed along the path of the wave, it will measure successive increases and decreases in the pressure at that location as the wave travels through space (Figure 7.3).

It is important to notice that the individual air molecules do not move nearly as fast or as far as the acoustic wave. The air itself is not flowing, as in a wind. Instead, neighboring molecules are bumping into each other, and an individual molecule pushed out from a region of higher pressure may only travel a few nanometers before it collides with another molecule. Sound waves move through air at around 343 m/s at room temperature; the exact speed depends mainly on the temperature (and, to a lesser extent, the humidity) of the air.

Acoustic waves can be described in terms of three principal quantities: amplitude, frequency, and phase. Amplitude (also called intensity or level) is a measure of how much the air pressure changes between compression and rarefaction. Clapping your hands more forcefully causes a larger change in pressure and a wave with a higher amplitude. Amplitude is measured in units of pressure (in the metric system, pascals); however, because the perception of acoustic amplitude follows a logarithmic scale, it is more often reported in decibels of sound pressure level (dB SPL), computed relative to a reference pressure of 20 micropascals, roughly the threshold of human hearing. As shown in Table 7.1, each step of 20 on the decibel scale corresponds to a ten-fold increase in pressure.
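The logarithmic conversion between pressure and dB SPL can be written out explicitly. A minimal Python sketch (our own illustration; the function names are hypothetical), assuming the standard 20 µPa reference pressure:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (approx. human hearing threshold)

def pascals_to_db_spl(pressure_pa: float) -> float:
    """Convert a pressure amplitude in pascals to decibels SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

def db_spl_to_pascals(level_db: float) -> float:
    """Convert a level in dB SPL back to a pressure in pascals."""
    return P_REF * 10 ** (level_db / 20)

# Multiplying the pressure by ten adds 20 dB, reproducing the
# step pattern shown in Table 7.1.
```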

Top row: Sound pressure waves represented as dots of varying density. Below is a line graph of a sound wave with key features labeled: amplitude, period, pressure, time. 2nd row: Left: Image of string oscillating when held at each end. Middle: line drawing of a simple periodic wave. Right: Line graph of amplitude vs frequency of that simple wave. 3rd row: Left: Image of string oscillating when held at each end plus another separate string which is held at each end and also in the middle. Middle: line drawing of a complex periodic wave. Right: Line graph of amplitude vs frequency of that complex wave. 4th row: Left: Line drawing of turbulent air flow. Middle: line drawing of an aperiodic wave. Right: Line graph of amplitude vs frequency of that aperiodic wave.
Figure 7.3 Physical properties of acoustic waves

The frequency of a wave is a measure of how rapidly the air at one location changes from compressed to rarefied and back (Figure 7.3). Frequency is measured in Hertz (Hz), the number of cycles that occur each second. The frequency of a sound wave is perceived as pitch. Low-frequency waves sound deep and rich, whereas high-frequency waves are perceived as sharp and thin.

The phase of a wave is a measure of time relative to the cycles of compression and rarefaction. Phase is not perceived directly, but it affects how different waves interact with each other.

Almost any process that displaces air molecules will result in an acoustic wave. One of the most common sources of sound is periodic vibration. You can generate periodic motion by plucking or striking a string that is under tension (for example, on a guitar, piano, or harp). Pluck it right in the middle, and listen to the sound while watching the string. The vibration will probably be too fast for your eyes to track, so the string will appear wide in the middle and narrow at the ends, as illustrated in Figure 7.3. As it vibrates, the string compresses the air while moving in one direction and rarefies it while moving in the other. The vibrations tend to occur at a specific frequency that depends on the mass of the string, its length, and how tightly it is tensioned. The motion of the string and the sound wave it produces are described by a simple mathematical function called a sinusoid (or sine wave) with a single frequency and amplitude. The form of this function and its relationship to the physical properties of the string are covered in most elementary physics texts.
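The dependence on mass, length, and tension can be made concrete with the standard formula for an ideal string fixed at both ends. A minimal Python sketch (our own illustration; the function name and example values are hypothetical):

```python
import math

def string_fundamental_hz(tension_n: float, length_m: float,
                          mass_per_length_kg_m: float) -> float:
    """Fundamental frequency of an ideal string fixed at both ends.

    f = sqrt(T / mu) / (2 L): tighter strings vibrate faster, while
    longer or heavier strings vibrate more slowly.
    """
    return math.sqrt(tension_n / mass_per_length_kg_m) / (2 * length_m)

# Halving the length doubles the frequency, which is why pinching a
# string at its midpoint raises the note by an octave.
```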

Amplitude
dB SPL | Pa       | Example
0      | 0.00002  | Mosquito 3 m away in a silent room (human hearing threshold)
10     | 0.000063 | Calm breathing
30     | 0.00063  | A quiet office with the computer off
50     | 0.0063   | Normal conversation 1 m away
70     | 0.063    | Comfortable music listening level
90     | 0.63     | Traffic on a busy roadway
100    | 2        | Jackhammer 1 m away
120    | 20       | Jet engine 100 m away (risk of instantaneous noise-induced hearing loss)
170    | 6300     | Firecracker 0.5 m away
Table 7.1 Properties of sound

Now press the string down or pinch it at its midpoint before plucking it. The half that you pluck will vibrate twice as fast as the whole string did, producing a sound wave with twice the frequency. You might notice that the higher note sounds similar to the original even though its frequency is doubled. If you pinch the string three-quarters of the way along and pluck the short part, the frequency will be four times as high, but the note will still sound similar. This is an illustration of how the perception of frequency is logarithmic, as we’ll discuss more later.

What happens if you pluck the whole string, but nearer to one end? You might notice that the note sounds richer and more complex, and that the movement of the string looks more complicated, without one clear wide part in the middle. Just as white light is composed of electromagnetic waves with frequencies spanning from red to violet, sounds can be composed of multiple waves with different frequencies. When the guitar string is plucked nearer the end, it will vibrate at the frequency corresponding to the full length of the string but also at twice that frequency (corresponding to half the length), three times that frequency (corresponding to one third of the length), and so on. A series of integer multiples like this is called a harmonic series. The lower-frequency and higher-frequency sound waves add together to produce complex harmonic motion. The specific combination of frequencies in a complex sound is called its spectrum. Most simple and complex periodic sounds are perceived as tones with a defined pitch that corresponds to the lowest frequency in the harmonic series.
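The idea of a harmonic series can be illustrated by summing sinusoids at integer multiples of a fundamental frequency. A minimal Python sketch (our own illustration; the function name is hypothetical) of the pressure at one point in space over time:

```python
import math

def harmonic_tone(fundamental_hz, harmonic_amplitudes, t):
    """Pressure at time t of a complex tone built from a harmonic series.

    Harmonic n has frequency n * fundamental_hz; the list of
    amplitudes gives the strength of each harmonic and thus
    defines the spectrum of the sound.
    """
    return sum(a * math.sin(2 * math.pi * n * fundamental_hz * t)
               for n, a in enumerate(harmonic_amplitudes, start=1))

# A 110 Hz tone with three harmonics (110, 220, and 330 Hz):
# sample = harmonic_tone(110.0, [1.0, 0.5, 0.25], t)
```

The resulting waveform repeats at the fundamental frequency, which is why a complex harmonic sound is heard as a single pitch.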

Take a moment to think about this: you hear the string on the guitar as a single note, yet it is composed of several waves with different frequencies. As we will see later, each of these waves activates a different set of receptors in the inner ear and a different set of neurons that transmit information into the brain. In order for us to understand how audition works, we will need to understand how these different channels of information are combined by the brain to produce a single unified percept or split apart to distinguish between different sources, as when a person is singing along to the guitar.

Many species of animals generate sounds through periodic motion. The chirp of a cricket is produced by rubbing two textured surfaces on its wings together. The vocal folds in the mammalian larynx and the avian syrinx vibrate periodically as air is exhaled, and the frequency of the vibrations can be controlled by muscles that increase or relax the tension. In human speech (and even more so in singing), the vowels are produced by periodic movement of the vocal folds.

Sounds that do not have a regularly repeating pattern are not perceived as having pitch and are called aperiodic. The main sources of aperiodic sounds are transient collisions between surfaces and continuous, turbulent air flow. In turbulent air flow, the molecules are not all moving at the same speed and direction. This causes eddies and ripples to form as faster-moving air collides with slower-moving air, and the pressure variations in the ripples propagate outward as acoustic waves. Turbulent flow tends to occur whenever air is forced through narrow openings or disorderly obstacles like trees. You can make turbulent air flow by lightly pursing your lips and exhaling forcefully (but not forcefully enough to make a whistle). Aperiodic sounds do not contain a single frequency or even a regular combination of frequencies as in complex harmonic sounds. Instead, the pressure fluctuates at random across a broad range of frequencies. This is perceived as noise. A continuous sound that has equal amplitude across all frequencies is called white noise. Aperiodic sounds are also commonly used in acoustic communication. In human speech, many of the consonants are produced by aperiodic sounds.

How does sound travel and interact with objects?

Sound’s ability to travel through the air is what allows the auditory system to sense cues and signals coming from a distance. In this respect, the sense of hearing shares certain similarities with vision, but acoustic waves are much slower and have longer wavelengths than visible light. Humans can hear sounds with frequencies between approximately 20 and 20,000 Hz. In air, these frequencies correspond to wavelengths between 17 m and 17 mm. In contrast, visible light has wavelengths between 400 and 780 nm. In other words, whereas light waves are much, much smaller than most behaviorally relevant objects, sound waves are around the same size. As a consequence, acoustic waves exhibit a number of phenomena that are important to understanding how sound works.

First, sound waves can diffract around common objects. You can illustrate diffraction by placing a stone about 1 cm in diameter in the center of a shallow bowl of water. Tap the water near the edge of the bowl and watch how the ripples bend around the stone and continue on as if it were not there. This is diffraction. Waves can diffract around objects that are around the same size or smaller than the wavelength, which means that lower frequencies (longer wavelengths) are less likely to be blocked. Thus, a predator may be impossible to see when it hides behind a rock or a tree, but most of the noises it makes will simply pass around the obstacle.

Second, solid objects are more likely to transmit sound waves than visible light waves. Low-frequency sounds in particular are more easily transmitted through solids, which is why you can hear construction noises through solid walls and the bass line from your downstairs neighbor’s music. At higher frequencies, solids tend to reflect sound. Reflections cause an acoustical phenomenon called reverberation, in which echoes of a sound reach the same point at slightly different times. For example, if you are in a large room with a concrete floor and metal walls, and someone a few meters away from you claps, you will hear not only the wave coming directly from the source to your ears, but all the other waves that reflect off the floor and the walls. The reflected waves have farther to travel, and because the speed of sound is around 343 m/s, there can be a perceptible delay between the arrival of the direct wave and its reflections. Reverberations with long delays are heard as distinct echoes. Reverberations with short delays tend to fuse with the main sound, making it sound more “alive”. Surfaces that are soft or uneven are better able to absorb sound, reducing the intensity of the reverberation, and rooms that have little reverberation sound muffled or “dead”. With practice, people can learn to use the acoustical characteristics of different materials as cues to their location within a room, an ability akin to the specialized echolocation of bats and dolphins.
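The delay between a direct wave and one of its reflections is just the extra path length divided by the speed of sound. A minimal Python sketch (our own illustration; the function name and example distances are hypothetical):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def reflection_delay_s(direct_path_m: float, reflected_path_m: float) -> float:
    """Extra time a reflected wave takes to arrive, relative to the
    wave that travels directly from the source to the listener."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND

# A reflected path 10 m longer than the direct path arrives
# roughly 29 ms after the direct wave.
```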

The long wavelengths and reflectiveness of sound cause another phenomenon called resonance. If sound waves enter a space where they can reflect back and forth between two surfaces, they will interact with each other through interference. If an integer number of half-wavelengths fits exactly between the two surfaces, the peaks and valleys of the wave traveling in one direction will line up with those of the wave traveling in the other direction, leading to constructive interference. Wavelengths that do not fit this condition will not line up consistently, leading to destructive interference. For complex sounds comprising waves of multiple frequencies, resonance boosts certain frequencies (wavelengths) while attenuating others. This effect is also known as filtering. When you blow across the mouth of a bottle, the aperiodic noise produced by turbulence resonates in the bottle, boosting a single frequency (and its harmonics) to produce a tone.
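For two parallel reflecting surfaces, the condition that a whole number of half-wavelengths fits between them gives resonant frequencies of n times the speed of sound divided by twice the separation. A minimal Python sketch (our own illustration; the function name is hypothetical):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def resonant_frequencies_hz(distance_m: float, n_modes: int) -> list:
    """First n_modes resonant frequencies between two parallel
    reflecting surfaces separated by distance_m.

    Standing waves form when n half-wavelengths fit between the
    surfaces, i.e. f_n = n * v / (2 * d).
    """
    return [n * SPEED_OF_SOUND / (2 * distance_m)
            for n in range(1, n_modes + 1)]

# Surfaces 1 m apart boost roughly 171.5 Hz, 343 Hz, 514.5 Hz, ...
```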

Filtering is important to the production of speech. Recall that vowels are produced by periodic motion of the vocal folds. But how do we make different vowels? You can illustrate this by singing “aah” and then changing to “ooh” while holding the same pitch. If you pay close attention to your tongue and lips, you’ll notice that the shape of your mouth changes as you transition between different vowels. What’s happening as you move those muscles is that the sizes of the oral and pharyngeal cavities in your airway are changing to create different resonances that boost and suppress specific frequencies in the sound coming from your larynx.

A consequence of these phenomena is that a listener commonly hears many different sound sources simultaneously. This is an advantage in that it is possible to hear sources without having to look at them, to take in an entire auditory scene in full 360-degree surround, and to hear multiple instruments or singers making music together. It also creates a difficult task for the auditory system, which has to separate out multiple sound sources and determine their locations. As we shall see, the structure of the ear performs part of this task, separating complex sounds into their component frequencies, but the central nervous system still has an enormous amount of work to do to transform auditory scenes into coherent perception.

Citation/Attribution

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution-NonCommercial-ShareAlike License and you must attribute OpenStax.

Attribution information
  • If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:
    Access for free at https://openstax.org/books/introduction-behavioral-neuroscience/pages/1-introduction
  • If you are redistributing all or part of this book in a digital format, then you must include on every digital page view the following attribution:
    Access for free at https://openstax.org/books/introduction-behavioral-neuroscience/pages/1-introduction
Citation information

© Nov 20, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.