Introduction to Behavioral Neuroscience

19.3 What Happens to Unattended Information?

Learning Objectives

By the end of this section, you should be able to

  • 19.3.1 Identify factors that impact the degree to which unattended information is processed.
  • 19.3.2 Articulate the perceptual load theory and explain its relationship to debates concerning attentional selection.

So far in this chapter, we've talked mostly about the brain systems involved in selecting and processing the things that are the focus of your attention, without much consideration for what happens to the things that are not the focus of your attention at any given moment. We discussed at the beginning of the chapter that attended information benefits from enhanced sensory processing and gains access to our conscious awareness, but what is the fate of everything else? In this section, we'll unpack some of the limits to processing unattended information and discover how those limitations reveal critical bottlenecks in the brain's ability to deal with competing sources of information.

Inattentional blindness

In the beginning of this chapter, we met Xander, who was concerned that there was something wrong with their eyes because they failed to notice a gorilla in the middle of a video showing people passing a basketball (Figure 19.1). That anecdote stems from a famous experiment performed by Simons & Chabris (1999), in which participants did the exact task described earlier, namely, counting the number of basketball passes performed by a group of players moving around a scene (if you'd like to watch the full video yourself, visit Dan Simons' website). In their study, roughly 50% of participants failed to notice a person in a gorilla suit walking through the scene (even when it stopped in the middle of the room and beat its chest!). Participants accurately counted the number of basketball passes, so their failure didn't stem from a lack of attention in general. Rather, attention was so focused on the primary task (counting passes) that they were completely unaware of other (even strikingly novel!) information that passed before their eyes. As mentioned earlier, our top-down goals can affect the degree to which bottom-up information captures attention. This demonstrates the phenomenon of inattentional blindness (a term originally coined by Mack & Rock, 1998), and it points out that much of the information present in our sensory world escapes our conscious awareness. Inattentional blindness is not limited to laboratory settings. In fact, several studies have shown that people will fail to notice a clown riding next to them on a unicycle (Hyman et al., 2010) or money dangling from a tree (Hyman et al., 2014)—even when they have to move to one side or the other to avoid walking into the branches holding the money!

Inattentional blindness shows that we often fail to notice salient information in plain sight, provided that our attention is engaged on a different source. In a similar manner, we often fail to notice significant changes to visual information that we are processing—a phenomenon known as change blindness (for a review of the differences between inattentional blindness and change blindness, see Jensen et al., 2011). For instance, in one striking experiment, a researcher stopped pedestrians on a sidewalk to engage them in a conversation, but partway through, a large obstacle passed between the researcher and the pedestrian. Unbeknownst to the pedestrian, the researcher who started the conversation (Researcher A) switched places with a different researcher (Researcher B) during the obstruction. Roughly half of the participants failed to notice that they were talking to a totally different person (Simons & Levin, 1998)!

In both inattentional blindness and change blindness, we are sometimes completely unaware of obvious, salient, and novel information. But what does our brain do with this information? Do these failures imply that we engage in little to no processing of ignored input? Or does the brain fully encode the basic sensory information, which then somehow fails to gain access to conscious awareness? In the next section, we will consider an important historical debate concerning when, and how, unattended information is filtered out by the brain.

Early/late selection

Early attention researchers clearly understood that not all information is processed equally but wrestled with the best way to characterize the nature of selective attention. In the 1950s, Broadbent (1958) developed a model (Figure 19.10) that envisioned information processing as proceeding along a series of stages that go from rudimentary (e.g., edge and color detection) to complex (e.g., whole object descriptions, memory matching, etc.). After information passes through these various stages, it can then gain access to higher-level executive functions (e.g., decision making) and conscious awareness. In the model, you see channels (represented by arrows) that reflect different sources of information (these could be different locations in space, different objects, etc.).

Top: A flowchart of early selection. Attended and unattended sensory inputs go to registration, but only attended information passes to perceptual analysis and then on to semantic encoding and executive functions/decisions, memory, etc. Bottom: A flowchart of late selection. Attended and unattended sensory inputs go to registration, then perceptual analysis, and then on to semantic encoding. Only attended information then goes on to executive functions/decisions, memory, etc.
Figure 19.10 Broadbent's model of selective attention

Broadbent (1958) argued that attention works like a filtration system and that unattended sources of information are filtered out relatively early in the game, prior to complete processing, a concept known as early selection. Many sources of evidence are consistent with this view. For instance, Cherry (1953) presented participants with speech over headphones and asked them to repeat back what they heard in one ear specifically (different speech was presented to each ear). This is a difficult task known as dichotic listening, but people can do it quite well with a little training. The interesting thing about these dichotic listening paradigms wasn't what people were able to report about the speech they were attending to. Rather, after the experiment was over, the researchers asked questions about the speech that was unattended and found that participants might be able to remember lower-level (i.e., basic or sensory) features of the speech, such as the pitch or gender of the speaker, but often failed to notice higher-level (i.e., conceptual or semantic) features of the speech, such as the language (German vs. English) or whether it was played forward or backward. Apparently, relatively basic information processing occurred for the unattended information, but higher-level information processing related to the meaning (i.e., semantics) of the speech did not, consistent with early selection.

Not all studies are consistent, however, with the notion of early selection. In fact, other research (e.g., Deutsch & Deutsch, 1963) suggests that all sources of information proceed through relatively late stages of semantic processing, and that only then does attention filter out unattended sources from engaging with executive functions or conscious awareness—a concept known as late selection. Early evidence of late selection also comes from dichotic listening tasks. For instance, Moray (1959) showed that when a participant's name is presented to the unattended ear, they will often notice it and switch the focus of their attention to that ear. You've probably experienced this phenomenon yourself. If you're in a crowded room with many people talking and you're focused on having a conversation with one person, then you probably won't know much about the other conversations going on around you. But the second someone says your name, it will capture your attention—even if you were unaware of everything else that person was saying up until that point. This phenomenon is sometimes referred to as the Cocktail Party Effect, and it suggests that, at least in some cases, unattended information is processed to a relatively high level before it is filtered out through attention (you must have processed the meaning, or semantics, of the other conversation to know that they said your name). Interestingly, not all names are the same—other studies (Howarth & Ellis, 1961) showed that a person's own name captured attention more easily than another person's name, further supporting the idea that higher-level semantic features of unattended information (e.g., personal relevance) were being processed.
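To make the contrast between these two views concrete, here is a minimal sketch that caricatures early and late selection as processing pipelines for an attended and an unattended channel. The stage names, the dictionary representation of the speech, and the own-name check are illustrative assumptions for this sketch, not part of Broadbent's or Deutsch & Deutsch's original formulations.

```python
# A toy contrast of early vs. late selection, assuming each channel
# (e.g., attended vs. unattended ear) moves through the same stages:
# sensory registration -> perceptual analysis -> semantic encoding -> awareness.

def process_channel(speech, attended, selection="early"):
    """Return a dict describing how far an input gets through processing."""
    result = {"sensory": {"pitch": speech["pitch"], "gender": speech["gender"]}}

    if selection == "early" and not attended:
        # Early selection: unattended channels are filtered out before
        # semantic analysis, so only basic sensory features survive.
        return result

    # Late selection (or any attended channel): semantic encoding happens
    # regardless, and filtering occurs only at the door to awareness.
    result["semantics"] = {"language": speech["language"],
                           "contains_own_name": speech["contains_own_name"]}
    result["reaches_awareness"] = attended or speech["contains_own_name"]
    return result

unattended_ear = {"pitch": "low", "gender": "male", "language": "German",
                  "contains_own_name": True}

print(process_channel(unattended_ear, attended=False, selection="early"))
# Only pitch and gender survive, as with Cherry's (1953) listeners.
print(process_channel(unattended_ear, attended=False, selection="late"))
# Semantics are computed, so your own name can still capture attention,
# as in Moray's (1959) cocktail party findings.
```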

So, does the attentional filter operate early or late? This is a classic "either-or" question that has been debated for many years in psychological circles. As with most such questions, and as we'll see in the next section, the answer is probably "both".

Science as a process: Perceptual load and neural correlates

Does selective attention filter out information early (prior to higher-level semantic processing), or late (after such processing has occurred)? Various attempts to accommodate both answers within Broadbent's (1958) information processing framework have been proposed over the years (e.g., Treisman, 1960). However, one of the most successful attempts to integrate these two viewpoints comes from the perceptual load theory (Lavie & Tsal, 1994; Lavie, 1995). According to this theory, attention is a limited-capacity resource (i.e., there's not enough to go around) that must be allocated strategically to different sources of information. This allocation happens dynamically, depending on a variety of factors, including how much of a strain any given task puts on information processing systems (i.e., perceptual load). When we perform a relatively difficult task (high load), we expend considerable attentional resources to complete it, but when we perform a relatively easy task (low load), we need not engage as much of the attentional currency. The theory also posits that any leftover attentional resources that aren't engaged by the attended task will automatically be deployed to task-irrelevant information.

The perceptual load theory makes several interesting predictions about how much information processing occurs for unattended sources. For instance, the theory argues that if you are engaged in a relatively easy task, then significant attentional resources will be "left over" to be applied to unattended information, which will therefore be processed quite fully and appear to reflect late selection. If, however, you are engaged in a relatively difficult task, then the limited attentional resources that are left over and allocated to unattended information will result in quite impoverished processing and appear to reflect early selection. A real-world example might be the degree to which you notice billboards along the side of the road. If you are driving in very difficult conditions (bad weather, difficult terrain, etc.), you will be so focused on the central task of driving that you will not have any leftover attentional resources to process the billboards. If, however, you are driving in relatively easy conditions, then driving will not consume all of your attentional resources, and the leftover resources will be deployed to unattended information on the side of the road such as the billboards.
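One way to see the theory's logic is as bookkeeping over a fixed attentional budget: the task consumes what it needs, and whatever is left over spills onto task-irrelevant input automatically. The sketch below illustrates that arithmetic; the capacity and load values are arbitrary placeholders, and only the qualitative ordering matters.

```python
# A toy version of perceptual load theory's resource bookkeeping.
# Capacity and load are in arbitrary units chosen for illustration.

ATTENTION_CAPACITY = 1.0  # a fixed, limited pool of attentional resources

def distractor_processing(task_load):
    """Leftover resources that spill over to unattended information."""
    used_by_task = min(task_load, ATTENTION_CAPACITY)
    return ATTENTION_CAPACITY - used_by_task

easy_drive = distractor_processing(task_load=0.3)   # low load: clear road, good weather
hard_drive = distractor_processing(task_load=0.95)  # high load: storm, difficult terrain

print(f"Billboard processing on an easy drive: {easy_drive:.2f}")
print(f"Billboard processing on a hard drive:  {hard_drive:.2f}")
# Low load leaves plenty of spillover (unattended billboards get processed,
# which looks like late selection); high load leaves almost none (which
# looks like early selection).
```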

There is a large body of evidence to support the perceptual load theory. For instance, Miller (1991) showed that when participants attended to a central letter on a computer screen and ignored surrounding letters, the distracting letters influenced response times more when the task was easy (low load) than when it was hard (high load). Functional MRI research also shows that perceptual load influences the degree to which visual information processing occurs for unattended sources of information. For instance, Schwartz and colleagues (2005) conducted an fMRI experiment in which participants attended to a stream of upright and inverted “t”s in the middle of a computer screen and ignored a flashing checkerboard pattern in the periphery (Figure 19.11). In some cases, participants performed a relatively easy (low load) task—responding to red “t”s, whether they were upright or inverted (recall that searching for a singleton feature such as color requires little effort). In other cases, participants performed a much harder (high load) version of the task—responding to upright blue “t”s or inverted yellow “t”s (i.e., a conjunction search, which is more difficult). The researchers in this study took advantage of the fact that visual areas exhibit retinotopic organization, which means that the spatial arrangement of information on the retina corresponds to the spatial arrangement of the neurons that process that information in each visual area. Thus, they were able to disentangle brain activity from V1, V2, and other visual areas.
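The difference between the two versions of the task boils down to how many features define the target. The snippet below spells out the two target rules described above; the (color, orientation) representation and the function names are our own illustrative choices, not the stimuli or code used in the original study.

```python
# Target rules for the low-load (feature) and high-load (conjunction) tasks.
# Each "t" is represented here as a (color, orientation) pair.

def is_target_low_load(color, orientation):
    # Feature (singleton) search: one feature settles it; orientation is irrelevant.
    return color == "red"

def is_target_high_load(color, orientation):
    # Conjunction search: color and orientation must be evaluated together.
    return (color == "blue" and orientation == "upright") or \
           (color == "yellow" and orientation == "inverted")

stream = [("blue", "inverted"), ("red", "upright"), ("yellow", "inverted")]
print([is_target_low_load(c, o) for c, o in stream])   # [False, True, False]
print([is_target_high_load(c, o) for c, o in stream])  # [False, False, True]
```

Because the conjunction rule requires binding two features for every item in the stream, it places a heavier perceptual load on the observer, which is precisely the manipulation the theory calls for.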

Top: A diagram of the stimuli presented in the perceptual load experiment described in the main text. Participants search for a specific target in a stream of upright and inverted “t”s while ignoring a flashing checkerboard pattern in the periphery. Bottom: A diagram of a human brain with V1-V4 regions highlighted, plus a bar graph showing that fMRI activity evoked by the checkerboard pattern in a range of early visual areas was greater when participants completed the low-load version of the task than when they completed the high-load version, consistent with perceptual load theory.
Figure 19.11 Perceptual load theory and neural correlates Checkerboard pattern: By Sven Hermann, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=124163044

Schwartz and colleagues (2005) found that the irrelevant checkerboard pattern produced significantly greater brain activity in a range of early visual areas in the low load condition compared to the high load condition (Figure 19.11). Note that the visual displays were identical in both the high and low load cases, but since (according to the perceptual load theory) greater attentional resources would automatically be deployed to the unattended checkerboard under low load compared to high load conditions, there would be more extensive neural processing of the checkerboard pattern during the easy version of the task compared to the hard version. Converging ERP evidence (Handy et al., 2001) shows that early visually evoked potentials mirror the fMRI results, with greater P1 component amplitude to unattended distractors under low perceptual load relative to high perceptual load (see Methods: ERP, Methods: MRI/fMRI).
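Putting the pieces together, the qualitative prediction that both the fMRI and ERP findings bear out can be stated in a few lines: the response evoked by the ignored checkerboard should track whatever resources the central task leaves unused. In this sketch the load values are arbitrary placeholders; only the direction of the inequality comes from the results described above.

```python
# Qualitative prediction behind Schwartz et al. (2005) and Handy et al. (2001):
# the identical checkerboard (or distractor) should evoke a larger early visual
# response under low load than under high load.

def predicted_distractor_response(task_load, capacity=1.0):
    """Distractor-evoked response assumed proportional to leftover resources."""
    return max(capacity - task_load, 0.0)

low_load = predicted_distractor_response(task_load=0.3)   # easy feature search
high_load = predicted_distractor_response(task_load=0.9)  # hard conjunction search

assert low_load > high_load  # larger V1-V4 activity and P1 amplitude under low load
```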
