Learning Objectives
By the end of this section, you will be able to:
- Identify the major research methods for studying individuals across the lifespan
- Describe the strengths and weaknesses of different research methods
- Compare and contrast correlation and causation
- Discuss ethical considerations in lifespan developmental research
Ryne works for the federal government and was recently put in charge of selecting an anti-bullying program to recommend to all school districts in the country. Many programs have been created and tested in different settings over the past few years. They vary on the age range of the children they were tested on, the time each takes to deliver, the number of sessions, whether the participants’ behavior was tracked across time, and whether a teacher, a principal, or a safety officer delivered the program. Luckily, Ryne has training in social science research methods and was able to think through relevant issues to help make the determination. For example, it was important to consider which programs show a cause-and-effect relation between program delivery and a decrease in school bullying, and also whether any unique aspects of the settings in which each program was tested might disqualify them for nationwide implementation.
Ryne’s job shows how important research skills are to both scholarship and application across the discipline of developmental psychology. His approach mirrors what psychological scientists do: they examine evidence carefully, consider factors that could affect the results, and focus on studies that demonstrate a clear connection between cause and effect. Ryne is applying this scientific way of thinking to a real-world problem.
Psychological Research
While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or because they are happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the developmental psychologist must be creative in finding ways to better understand behavior. An important foundation of the study of lifespan development is understanding how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives.
While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. It is through systematic scientific research that we are able to free ourselves from our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.
The goal of all scientists is to better understand the world around them. Developmental psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (bodily) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed and replicated.
The Process of Scientific Research
Scientific knowledge is advanced through the scientific method, in which ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations). Those empirical observations lead to more ideas that are tested against the real world, and so on.
In this sense, the scientific process is a cycle of deductive and inductive reasoning (Figure 1.16). Researchers test ideas: in deductive reasoning, ideas are tested in the real world; whereas in inductive reasoning, real-world observations lead to new ideas. These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on these aspects. For example, case studies are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.
If both theories and hypotheses are ideas, what sort of ideas are they, exactly? A theory is a well-developed set of ideas that proposes an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory. For example, many of the theories discussed thus far have been tested by many, many research studies over multiple decades, resulting in a more current version of the theory. This is demonstrated in things like our updated understanding of language development and the role of the environment in learning language. A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the results of these tests (Figure 1.17). Note that theories are not typically proven; rather, they are supported, modified, or rejected.
Psychological Research in Developmental Psychology
Developmental psychology, as part of the broader scientific community, has several overarching goals. Those goals can be organized to match the way a developmental scientist may approach any new topic of inquiry. The first goal is accurate measurement and description of the phenomenon in question. With proper research methods and techniques come detailed descriptions of the research question. In the case of cognitive development, for example, Piaget pioneered the clinical interview method to uncover children’s underlying thought processes. This led to rich descriptions of the reasoning abilities of children of various ages. To summarize this first goal, it seeks to answer the question, "What is happening?"
The next goal of science is to understand and explain a phenomenon. It is not enough to merely describe what is observed. Rather, we want to understand the processes that make that phenomenon work, and to explain it to others in a coherent and complete way. Piaget developed and then tested a complete theory of how children solved problems about their world. The result is his theory of cognitive development, which attempts to explain how children’s thinking abilities develop over the first twenty years of life. To summarize this second goal, we now ask, "Why is this thing happening?"
The final goal of developmental psychology science, which relies on our understanding from achieving the second goal, is to apply and control the phenomenon. That is, developmental scientists aim to take their findings and explanations and apply them in the service of the individual or society. In this way, the goals of the scientific method mirror what humans have done as a matter of survival for millennia: adapt to and strive to thrive in the physical and social world, thereby assisting in our shared survival and enjoyment of life. To summarize this final goal, we are asking, "Is this thing going to happen again, and if so, how can we (or should we) control it?"
Reliability and Validity
Whether you are on a path to become a health care professional, educator, or psychologist, or just have an interest in human development across the lifespan, scientific literacy is a valuable life skill. Being able to discern the credibility of various studies or statistics is important. Understanding statistics and scientific findings, such as percentages, can help you determine which research results are worth applying. For example, if a medical doctor gives you two options for a medicine, it would be helpful to know which one is most effective and whether its benefits outweigh the potential costs. Because much research in developmental psychology requires studies across longer timespans, it is helpful to know how to trust the findings of a study and differentiate trustworthy research from bogus science. Being able to spot bogus science can also help you make effective choices for your health and well-being rather than risk wasting money or time on something that doesn’t work.
For research to be trusted, it needs to be shown to be both reliable and valid (Figure 1.18). The ability to consistently produce a given result is referred to as reliability. In the context of psychological research, this would mean that any instruments or tools used to collect data do so in consistent, reproducible ways.
Unfortunately, being consistent in measurement does not necessarily mean that you have measured something correctly. To illustrate this concept, consider a kitchen scale that would be used to measure the weight of cereal that you eat in the morning. If the scale is not properly calibrated, it may consistently under- or overestimate the amount of cereal that’s being measured. While the scale is highly reliable in producing consistent results (e.g., the same amount of cereal poured onto the scale produces the same reading each time), those results are incorrect because the tool’s calibration is off. This is where validity comes into play.
Validity refers to the extent to which a given instrument or tool accurately measures what it’s supposed to measure. For example, a driving test to determine if someone is ready to have a driver's license is likely to be more valid if it ensures the person knows the rules of driving (written portion) and can also drive a vehicle effectively (applied portion). If we only used one of those, getting a driver's license might not accurately indicate that a person can drive safely. Researchers strive to design studies and use instruments that are both highly reliable and valid.
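To make the distinction concrete, here is a minimal sketch in Python; the scale readings and the true weight are invented for illustration. It shows how a measure can be highly reliable yet still invalid:

```python
import statistics

# Hypothetical readings (in ounces) from the miscalibrated kitchen scale
# described above. The cereal actually weighs 30 oz, but the scale's
# calibration is off by about +2 oz.
true_weight = 30.0
readings = [32.1, 31.9, 32.0, 32.1, 32.0]

# Reliability: the same amount of cereal produces nearly the same
# reading every time (small spread).
print(f"spread of readings: {statistics.stdev(readings):.2f} oz")

# Validity: the readings are consistently wrong (systematic bias).
print(f"average error: {statistics.mean(readings) - true_weight:+.2f} oz")
```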
The temptation to make erroneous cause-and-effect statements based on correlational research is a common way people misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full.
There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. A meta-analysis of nearly forty studies consistently demonstrated, however, that a relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.
Why are we apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).
When evaluating findings, it is important to consider the extent to which a study complies with established standards of research integrity, and also to understand that an initial finding may suggest an exciting discovery but may not prove to be accurate once further research is conducted to replicate the results. For example, the popular idea that there is a connection between vaccines and autism spectrum disorder originated from findings that were falsified and later retracted. While there were seemingly logical connections—the Centers for Disease Control and Prevention recommending that mercury be removed from vaccines, similarities between symptoms of mercury poisoning and symptoms of autism spectrum disorder—the empirical evidence showed that there was no causal connection between those observations. The now-debunked study connecting vaccines to autism spectrum disorder remains a popular concept but is not supported by science. This case highlights the importance of scientists complying with the standards set by regulating bodies.
Another example—the so-called “Mozart effect”—elucidates the importance of replicability. One study may find certain results, but before those are generalizable, researchers in psychology need to demonstrate that the same results occur in other studies. Because of the time required to study change in a human, it sometimes takes several years, or even decades, to test whether results are replicable. When Rauscher and colleagues (1993, 1995) reported that listening to music by Mozart improved how children performed on tests, it spawned a popular trend to integrate that music and concept into children’s toys, books, and educational programming. While Rauscher’s findings held for those particular studies, subsequent researchers could not replicate the surprising results, and it was eventually determined that there was little to no scientific support for the Mozart effect (Jenkins, 2001; Pietschnig et al., 2010). In this example, we see that replication is important in psychological research, especially when findings are applied to everyday life.
Research Design
Many research methods are available to psychologists in their efforts to understand, describe, and explain behavior, change, and growth, plus the cognitive and biological processes that underlie such change. Some methods rely on observational techniques, while other approaches involve interactions between the researcher and the individuals who are being studied. Each research method has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce large amounts of information, but the ability to apply this information to the broader population (representativeness of the sample) is somewhat limited because of small sample sizes.
Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in artificial settings. This calls into question the ecological validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical issues.
Case Studies and Naturalistic Observation
The field of developmental psychology has a rich history of using descriptive methods to explore change, growth, and stability in individuals over time. In a case study, a great amount of detail is gathered about one or more individuals of interest, to gain a thorough understanding of each person. At the end of a case study, we likely have a good sense of an individual’s life and developmental history. The downside is that the insight we’ve gleaned is often applicable only to the subject of the case study. In other words, the findings lack generalizability: they may not extend to people other than those being studied. Still, there are usually take-away findings that we may be able to generalize to the larger population. In one case study, Oliver Sacks discusses a man who suddenly began having trouble understanding what the familiar objects in his environment were—when he looked at his wife, for example, he gave his best guess that he was looking at a hat. After extensive evaluation, Sacks determined that the man had developed visual agnosia—he had the ability to see but could not make sense of what he was looking at. If his wife spoke, or if he reached out and touched her, he knew instantly that it was her—his problem was only with visual recognition. Through his study and resulting case report, Sacks developed a detailed understanding of the underlying neurological problem, its development, and the likely course of the disease progression (Sacks, 1985, 2007).
The research method of naturalistic observation is the observation of research participants in real-life settings. If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. For instance, if we wanted to study whether older adults benefit from playing video games, we might start by observing older adults in the recreational areas of an assisted living facility. The downside is that in true naturalistic observation, there is no control over the setting, and researchers are unable to interact with those being observed because people might change their behavior in unexpected ways if they know they are being observed. As an example, imagine that your instructor asks everyone in your class if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?
An example of structured observation, a type of observation where people are observed while engaging in set, specific tasks, comes from the Strange Situation, developed by Mary Ainsworth. The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant on being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.
A benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means there is a higher degree of ecological validity, or realism, than might be gathered with other research approaches (like asking about handwashing instead of observing it). Therefore, the ability to generalize the findings of the research to everyday situations is enhanced. The power of naturalistic observation is to give the researcher ideas about what factors or variables may be relevant to include in a more structured research design later on. The major downside of naturalistic observation is that it is often difficult to set up and control. Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
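As a simple illustration of inter-rater reliability, percent agreement—the proportion of events two observers coded the same way—can be computed directly; the ratings below are hypothetical:

```python
# Hypothetical codes from two observers who classified the same ten
# classroom events as aggressive ("A") or non-aggressive ("N").
rater_1 = ["A", "N", "A", "A", "N", "N", "A", "N", "A", "N"]
rater_2 = ["A", "N", "A", "N", "N", "N", "A", "N", "A", "A"]

# Percent agreement: how often the two observers gave the same code.
matches = sum(a == b for a, b in zip(rater_1, rater_2))
print(f"inter-rater agreement: {matches / len(rater_1):.0%}")  # 80%
```

In practice, researchers often prefer statistics such as Cohen’s kappa, which also corrects for the agreement two observers would reach by chance.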
Correlation: How Variables Relate
The next level of research beyond describing a psychological phenomenon is to begin to piece together how variables of your chosen topic might be related. The statistical technique used to determine the degree of relation or association between two or more variables is called correlation. While correlation means there is a relationship between two or more variables, this relationship does not necessarily imply cause and effect. When two variables are correlated, it simply means that as one variable changes, so does the other. We can measure correlation by calculating a statistic known as a correlation coefficient. You might notice that the word “correlation” includes the word “relation”—a helpful reminder that it represents variable relations.
A correlation coefficient is a number from –1.00 to +1.00 that indicates the strength and direction of the relationship between variables. The number portion of the correlation coefficient indicates the strength of the relationship. The closer the number is to 1 (be it negative or positive), the more strongly related the variables are, and the more predictable changes in one variable will be as the other variable changes. The closer the number is to zero, the weaker the relationship, and the less predictable the relationship between the variables becomes. For instance, a correlation coefficient of 0.9 indicates a far stronger relationship than a correlation coefficient of 0.3. If the variables are not related to one another at all, the correlation coefficient is 0.
The sign—positive or negative—of the correlation coefficient indicates the direction of the relationship (Figure 1.19). A positive correlation means that the variables move in the same direction. Put another way, it means that as one variable increases so does the other, and conversely, when one variable decreases so does the other. A negative correlation means that the variables move in opposite directions. If two variables are negatively correlated, a decrease in one variable is associated with an increase in the other and vice versa. A helpful way to remember positive correlation is to think of two variables riding an elevator: the variables travel together, up or down. A negative correlation indicates the variables traveling in opposite directions, such as two friends waving at each other as they pass each other on escalators going in different directions: one goes up, the other goes down.
An example of a positive correlation is the relationship between an individual’s height and weight. Typically, someone who is taller will also weigh more than someone who is much shorter. One might expect a negative correlation to exist between someone’s tiredness during the day and the number of hours they slept the previous night: the amount of sleep decreases as the feelings of tiredness increase. In a real-world example of negative correlation, student researchers at the University of Minnesota found a weak negative correlation (r = –0.29) between the average number of days per week that students got fewer than five hours of sleep and their GPA (Lowry et al., 2010). In other words, more short nights of sleep were related to a lower GPA. Keep in mind that a negative correlation is not the same as no correlation. For example, we would probably find no correlation between hours of sleep and shoe size.
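As a sketch of how the correlation coefficient is actually calculated, the following Python function implements Pearson’s r; the sleep and tiredness numbers are invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours slept last night and next-day tiredness (1-10).
hours_slept = [8, 7, 7, 6, 6, 5, 4, 3]
tiredness = [2, 3, 2, 5, 4, 6, 7, 9]

print(f"r = {pearson_r(hours_slept, tiredness):.2f}")  # about -0.98
```

The sign of the result captures the direction of the relationship (negative here: more sleep goes with less tiredness), and its distance from zero captures the strength.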
Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic relationship between our variables of interest. Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive.
Recall that often dozens of variables or factors are working together to produce even the simplest human behaviors. What this means in practical terms is that you could have two variables moderately correlated with one another, but a third one that’s independently influencing the two variables you studied. This is called the third variable problem. Take aggressive behavior and violent video game playing, for example. Correlational research shows an association between playing violent video games and acting out aggressively (e.g., Dickmeis & Roe, 2019). However, it’s entirely possible that a third variable independently influences both aggressive outbursts and a preference for playing violent video games.
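A short simulation can make the third variable problem concrete. In this hypothetical sketch, a single third variable drives both measured variables, which end up correlated even though neither causes the other:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Hypothetical third variable z (say, a temperament trait) that
# independently influences both x (violent game play) and y (aggression).
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 1) for zi in z]  # x depends only on z
y = [zi + random.gauss(0, 1) for zi in z]  # y depends only on z

# x and y never influence each other, yet they correlate (about r = 0.5)
# because both track z. The correlation alone cannot reveal this.
print(f"r(x, y) = {statistics.correlation(x, y):.2f}")
```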
Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, research found that people who eat certain breakfast cereal may have a reduced risk of heart disease (Anderson et al., 2000). Cereal companies are likely to share this information in a way that maximizes and perhaps overstates the positive aspects of eating cereal. But does cereal really cause better health, or are there other possible explanations for the health of those who eat cereal? For example, consistent healthy dietary habits or exercise might better explain this association. While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.
Link to Learning
Manipulate this interactive scatterplot to practice your understanding of positive and negative correlation.
Experiments: Cause and Effect
The only way to establish a cause-and-effect relationship between two variables is to conduct a scientific experiment. Experiment has a different meaning in the scientific context than in everyday life. In everyday conversation, we often use it to describe trying something for the first time, such as experimenting with a new hair style or trying a new food. However, in the scientific context, an experiment has precise requirements for design and implementation.
The experimental method, a research design used to determine cause-and-effect relationships including specific design requirements, is a common tool psychological scientists use to investigate development and influences on that development. Some developmental psychologists will study people of a particular age, while others will run experiments across participants who span a variety of ages. The major strength of the experimental method is its ability to determine cause and effect relationships among variables. In one classic experiment, Condry and Ross (1985) tested the idea that knowing the gender of two children behaving aggressively would alter observers’ rating of the aggressiveness of the interaction. By altering the gender label attached to video footage of two children engaged in a snowball fight and changing nothing else, the researchers showed that young adult participants rated the boy-boy interaction as least aggressive. The authors concluded that the perception of aggression in children is influenced by observers’ different expectations of the behavior of boys and girls. (The finding supports gender-schema theory [Bem, 1981], the notion, which is discussed later, that we form different associations and expectations based on gender categories from an early age.)
In this example, we have all the primary ingredients of the experimental method. First, the experiment is based on a theory (the overarching explanation for a set of observations). The theory generates one or more hypotheses (theory-based predictions that are testable). In our example, the hypothesis was that the gender of the observed children would make for different perceptions of aggressiveness. In order to test this hypothesis, or prediction, the researchers created different conditions, or levels, of an independent variable, the variable that is altered in an experiment and is expected to be the cause or influence of some outcome behavior. In fact, the resulting outcome behavior that is measured in an experiment is called the dependent variable, because values of this variable are determined by, or dependent upon, the value of the independent variable. In the study by Condry and Ross (1985), there were four conditions, or levels, of the independent variable: boy-boy, boy-girl, girl-boy, and girl-girl. Note that the gender label was the only thing the researchers altered: in all four conditions, the same video footage was used, and only the gender labels were different.
Participants (those watching videos) were randomly assigned to view one of the four conditions. Random assignment means each participant has an equal chance of being placed in each condition. In fact, random assignment is an essential aspect of the experimental method; without it, you may end up with a disproportionate number of male participants in one condition and few in another. Random assignment helps ensure an even distribution of all relevant participant characteristics across all experimental conditions. After observing the assigned video footage, participants in each condition were then asked to rate one of the children along a variety of characteristics, including how aggressively they behaved. The researchers then compared the average ratings of aggressiveness across the four conditions to see whether their prediction was accurate.
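Here is a minimal sketch of how random assignment might be implemented; the participant IDs are hypothetical, and the four condition labels come from the Condry and Ross (1985) design:

```python
import random

# The four gender-label conditions from Condry and Ross (1985).
conditions = ["boy-boy", "boy-girl", "girl-boy", "girl-girl"]

# A hypothetical pool of 20 participants.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle the pool, then deal participants out round-robin so each
# person has an equal chance of any condition and groups stay balanced.
random.shuffle(participants)
groups = {cond: participants[i::len(conditions)]
          for i, cond in enumerate(conditions)}

for cond, members in groups.items():
    print(cond, members)
```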
The experimental method is the only research method that allows a psychological scientist to conclude that changes in one variable (in this case, gender label) caused changes in another (perception of aggression). The reason this conclusion is valid, or an accurate statement of the facts, is that the researchers have carefully manipulated the situation to alter only the variable of interest, and they used random assignment of the raters to neutralize all other relevant factors that could have contributed to the observed outcome. Without random assignment, other variables might explain the observed differences in ratings of the children’s aggressiveness. For instance, if the researchers had let the first ten participants who showed up watch the same video segment, these participants might, by happenstance, all have been members of the college football team, and that might have biased their ratings. Instead, random assignment gave each participant an equal chance to observe each possible labeled video clip.
The ability to pinpoint cause-and-effect relationships among two or more variables comes from careful control of the situation, but it does have one potential drawback. Because the experimental scenario is so carefully controlled, the question of artificiality arises. That is, how closely does the experimental scenario match naturally occurring relevant scenarios outside the psychologist’s laboratory? This question examines the characteristic called external validity, the degree to which an experiment's results and reality match. Low external validity is a significant threat to the applicability of laboratory findings to the everyday world. After all, if the experimental setup is so contrived that it doesn’t match any situation you might normally encounter, then the value or utility of the finding is in question. What can we do with a research finding that doesn’t apply in everyday life? In our example from Condry and Ross (1985), external validity would appear to be high, because the experimental setup used video footage of actual children engaged in a snowball fight. We may not always know the gender of the children involved in every such encounter, but when we do, we can expect that information to inform our judgments.
Other drawbacks of the experimental method include the potential cost and time it might take to devise an experiment to test certain topics. In fact, it is not possible, feasible, or sometimes even ethical to test some topics with the experimental method. For example, for both practical and more importantly ethical reasons, a researcher interested in studying the effects of divorce on adolescent development couldn’t randomly assign adolescents to different conditions of the independent variable—in this case, whether parents divorce or not. For important topics such as these, psychologists have to wait for conditions to arise through the natural course of life events—this is called a quasi-experimental design.
Because researchers in a quasi-experiment can’t randomly assign participants to the conditions, they lose a considerable amount of control as other variables enter the picture. Perhaps families that experience divorce have a lower household income on average. Or perhaps those who do not experience divorce have a higher tendency to belong to a religion that forbids divorce. These and many other variables could make for meaningful ways that the “divorce” and “no divorce” conditions vary in a natural or quasi-experiment, making it very difficult to establish a clear cause-and-effect relationship between divorce itself and adolescents’ psychological outcomes. However, as noted, a quasi-experiment is the only way to investigate some topics.
Time as a Variable
Developmental psychologists have designed some additional research approaches specific to the fundamental questions of the field, focused on understanding growth and change in individuals over time. To do this, researchers often employ the cross-sectional or longitudinal method, or the cross-sequential design, which combines the advantages of both. All three designs can be used with correlational, experimental, case study, and naturalistic observation methods.
Longitudinal Design
A longitudinal design studies a group of participants over a period of time, re-assessing them at various points. If we are interested in the development of friendships during adolescence, we might recruit a group of fifty sixth-grade students. We would give them a personality inventory, collect background information about each, and ask them to complete surveys about their friendships. Then, we would find these same fifty participants at six-month or one-year intervals, re-assessing the same information. At the end of five or six years, we’d have a rich data set and a really good idea about how the number, type, and quality of friendships change across adolescence.
Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might increase or decrease the risk of developing cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over twenty years to determine which of them develop cancer and which do not. Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of smoking and cancer (American Cancer Society, n.d.).
As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. This is known as attrition, the gradual loss or dropping out of participants from the original pool. Another issue is test familiarity, known as practice effects. Since participants are given the same battery of measures, including surveys, multiple times, they might get used to the questions, which could alter the way they think about and respond to them.
Finally, and this is the most serious challenge in longitudinal research, the longer the study duration, the higher the risk of encountering cohort effects. This means that the research results, which in our hypothetical study would take five or six years to obtain, might end up being limited in their applicability beyond a certain cohort. What if, toward the end of the study, a new virtual reality app was released that changed how teenagers communicate with each other? That would make our findings, and the six years of work that went into producing them, of limited value and possibly obsolete. Nevertheless, a longitudinal design comes closest to observing change within individuals over time, making this a highly valid and valuable approach.
Link to Learning
View this TED talk featuring the longest-running study that has followed the life trajectories of thousands of British children for the past seventy years and learn about the study’s major findings.
Cross-Sectional Design
The cross-sectional design is a more common developmental research design, and it offers solutions for most of the drawbacks we mentioned with longitudinal designs. A cross-sectional design studies groups of participants of different ages, that is, multiple segments of the population at a given time. In our friendship example, researchers would identify a group of twelve-year-olds, fourteen-year-olds, sixteen-year-olds, and eighteen-year-olds and assess them all on a variety of measures. Thus, they would have results suggesting the developmental progression of friendship type and quality across adolescence in a very short time, possibly a few months as opposed to six years. Attrition is not an issue, nor are practice effects, because all the participants are recruited and assessed at just one point in time.
One drawback of the cross-sectional approach is its inability to track individual development. Instead, researchers compare different age groups at a single point in time. To understand how development unfolds, researchers must infer the sequence of changes by piecing together data from these different age groups. Finally, with a cross-sectional design, you could still have cohort effects, especially in studies that use a larger range of age groups than in our example. In other words, people who are ten years old today have likely experienced all sorts of historical and environmental events that shaped them as individuals and that are different from the experiences of those who are now twenty and thirty years old. Those different events might be the source of any observed differences among the age groups.
Cross-Sequential Design
A cross-sequential design, sometimes called sequential design, combines the benefits of both cross-sectional and longitudinal designs. As in the cross-sectional design, groups of participants of different ages are recruited. However, instead of being assessed once, these participants are followed for a period of time, although usually much shorter than in a longitudinal study. For example, one child development study measured changes in the brain's executive functioning across middle childhood. To better understand how children's ability to use working memory and inhibition changes across development, the study began with three age groups (five-, seven-, and nine-year-olds) and then had all the age groups continue to participate one year later (Rollins & Riggins, 2017). Of course, it’s still possible for practice effects to occur, but with care these can be minimized. Table 1.4 summarizes the effectiveness of different research designs for studying different topics common in developmental psychology; a short sketch of a cross-sequential testing schedule follows the table.
| Research Area | Longitudinal Design | Cross-sectional Design | Cross-sequential Design |
| --- | --- | --- | --- |
| Individual development | Strong | Weak | Moderate |
| Development is normative for age | Strong | Strong | Strong |
| Tracking from early life events to later life events | Strong | Weak | Moderate |
| Change versus stability | Strong | Weak | Moderate |
| Historical or cohort data | Weak | Strong | Moderate |
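To make the cross-sequential schedule concrete, here is a minimal sketch loosely modeled on the Rollins and Riggins (2017) design; the start year and number of waves are invented for illustration:

```python
# Cross-sequential design: several age cohorts (the cross-sectional
# part), each re-tested across yearly waves (the longitudinal part).
start_ages = [5, 7, 9]  # cohorts as in Rollins & Riggins (2017)
start_year = 2014       # hypothetical first testing wave
n_waves = 2             # each cohort tested twice, one year apart

for start_age in start_ages:
    for wave in range(n_waves):
        print(f"cohort starting at age {start_age}: "
              f"wave {wave + 1} in {start_year + wave}, "
              f"tested at age {start_age + wave}")
```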
Ethics in Psychology Research
While many science-based disciplines are able to conduct research with materials or theoretical calculations (such as a chemist in a lab or a physicist planning a space trajectory), developmental psychology mostly relies on humans and other animals as research participants. Given that, special considerations enter into the ethical code of conduct for such research (APA, 2024). Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety.
Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members. The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles of ethical research in mind, and generally, approval from the IRB is required in order for the experiment to proceed.
Research Using Human Subjects
The Society for Research in Child Development (SRCD, 2021) provides a code of conduct for developmental research that includes the following principles and protections:
- Competence. Researchers should be appropriately trained and knowledgeable about their research participants’ cultural and social backgrounds and seek to eliminate any personal biases that might influence their research. Participants donate their time, energy, and personal aspects of themselves when they agree to be studied. Researchers show participants respect and gratitude by acting with scientific integrity and interpersonal sensitivity.
- Informed consent. Obtaining informed consent is a two-part process.
- Participants need to be fully informed of the purpose of the study, what will be required of them, potential risks including any harm, and the anticipated benefits to science and themselves from the study.
- Participants must give permission. Adults can give consent only when they have been fully informed, and children do so only under the guidance of a parent or guardian. Although a child cannot give legal consent, they can give assent and express their desire to participate or not. For example, if a baby is happy and comfortable, a study may continue with assent and consent. However, if a baby is crying and uncomfortable, the researcher should assume the child does not consent and end the study. A key part of consent is voluntariness. There should be no coercion, and participants should be free to end their participation in the study at any time. Researchers are ethically bound to ensure the participant knows this and will end the study if requested without coercion.
- Equity. Research should seek to identify and address developmental inequities and disparities wherever possible. Part of achieving this is being culturally sensitive and informed.
- Scientific integrity. All of the ethical standards operating in scientific investigation are also relevant and applicable to developmental research. Research findings, methodology, and data should be transparent and shared whenever possible for review by peers. Deception should be minimized, and participants should be debriefed (informed about the pertinent details) at the conclusion of the study. Participant information and data should be private.
- Balance of risk and benefit. Researchers should avoid harm, minimize risk, and weigh any risks against the possible benefits of conducting the research. They should assure participants of the confidentiality of all components of their participation.
- Dynamic assessment. If researchers uncover an unanticipated harm or other issue during the study, they should alter or discontinue the study.
These guidelines help ensure that a high degree of integrity is built into scientific research, with extra precautions taken to protect the personal integrity of participants. Developmental science research is difficult, meticulous work undertaken with great care and planning. The results provide benefit as we discover in ever more nuanced detail how humans grow and change and maintain stability over the course of the lifespan, a body of knowledge intended to help us understand one another more fully and better affect positive change in our social world.
Intersections and Contexts
Ethics and the Tuskegee Syphilis Study
Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, rural Black men from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in Black men (Figure 1.20). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals who tested positive for syphilis were never informed that they had the disease.
While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of forty years, many of the participants unknowingly spread syphilis to their spouses and sexual partners (and subsequently children born from those relationships), and many participants eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and strict ethical guidelines for research on humans.
Research Using Animal Subjects
Many psychologists conduct research involving animal subjects. Often, these researchers use rodents (Figure 1.21) or birds as the subjects of their experiments—the APA estimates that 90 percent of all animal research in psychology uses these species (APA, n.d.). Because many basic processes in animals are sufficiently similar to those in humans, these animals are often considered acceptable substitutes for research that would be considered unethical in human participants.
This does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.
Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental proposals provide for the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.
Scientific Literacy
Knowing more about the scientific process, methods, and ethics can help you to become more informed about how to interpret information you may come across. It can also help you to learn more about how to apply science to thriving in your own life. Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges.
In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.
We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have unique needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children’s development and position them for optimal levels of success in school and later in life (Blann, 2005). While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in them. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective (Neil & Christensen, 2009; Peters-Scheffer et al., 2011). While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants (Barnett, 2011). If you are committed to being a good steward of taxpayer money, you would want to look at research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund.
Life Hacks
Evaluating Information Sources
Every day, we are flooded with information from a variety of sources. Your uncle sends you a link to a news story about the rise of violent crime in your city. A friend shares a social media post about a quick way to build muscle. A blog you follow claims inflation is the highest it’s ever been, while The New York Times says the economy has recovered and shows signs of strength. How do you evaluate these various claims? Which can you trust and act on? Since none of us are experts in all these fields, we rely on information shared by others daily. Are there strategies for efficiently doing so?
First, consider how people assess information. We’re more likely to believe messages from a social group we belong to, respect, or identify with (APA, 2023). Trustworthiness also grows through repetition. The more times we hear something, whether true or not, the more likely we are to believe it (which is how advertising works). We are also motivated to believe what we hear when aroused by fear or other heightened emotions. Taken together, these tendencies raise the possibility of trusting false information.
One key strategy to assess the trustworthiness of information is to examine the source. Following are some general considerations (Kington et al., 2021):
- Is the information based on science? A community of scientists across hundreds of disciplines and around the world adhere to the principles of the scientific process. Scientific reports rely on the consensus and peer review of experts and are open to correction based on any new information that challenges them.
- Have the authors and/or publisher of the information acknowledged up front any political, financial, or even ideological interests they may have? If so, this disclosure helps you assess their trustworthiness. The “About Us” section on a website is a great way to learn this information.
- Is the information as applicable and accurate as it can be? In the United States and Canada, for example, legal, ethical, and practical considerations mean that information coming from *.gov and *.ca domains (respectively) is oftentimes trustworthy. Educational and non-profit institutions are another good source of information. These operate under public visibility and scrutiny, and their scholars are guided by scientific principles. Trustworthy sources in non-profit agencies include government think tanks, news organizations, professional associations of experts, foundations, and advisory panels to government agencies.
As scientists who study human change and growth across the lifespan, developmental researchers often have application as their primary goal when researching a topic. So, although developmental psychology examines answers to philosophical questions like “What does it mean to have a meaningful life?” ultimately, this is a field concerned with the practical. Developmental psychology is composed of many publications over several decades and from researchers across the globe. The theories and findings you learn about across this course often make use of a wide range of research methods, designs, and statistical analyses using many different groups of people. Researchers also use different tools before coming to a scientific consensus, including things like neurological measures (e.g., MRI, fMRI, & EEG), physiological measures (e.g., heart rate), observations of individuals or families (e.g., behavioral), and psychological measures (e.g., surveys and self-reports). By putting all of these together we can begin to form a more complete picture of human life.
One practical example of developmental psychology’s contribution to society is the shift away from corporal punishment of children to promote healthier parent-child relationships (Gershoff, 2002). This study used a meta-analysis, a way of combining many different research studies to test a hypothesis, to demonstrate that corporal punishment is not beneficial to children. In this course, you will learn about many historical findings, current studies, and topics that need further research as developmental psychologists strive to make contributions that impact our everyday lives.
References
American Cancer Society. (n.d.). History of the cancer prevention studies. http://www.cancer.org/research/researchtopreventcancer/history-cancer-prevention-study
American Psychological Association. (2023, November). Using psychological science to understand and fight health misinformation. https://www.apa.org/pubs/reports/health-misinformation
American Psychological Association. (2024). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code
Anderson, J. W., Hanna, T. J., Peng, X., & Kryscio, R. J. (2000). Whole grain foods and heart disease risk. Journal of the American College of Nutrition, 19(sup3), 291S–299S. https://doi.org/10.1080/07315724.2000.10718963
Barnett, W. S. (2011). Effectiveness of early educational intervention. Science, 333(6045), 975–978. https://doi.org/10.1126/science.1204534
Bem, S. L. (1981). Gender schema theory: A cognitive account of sex typing. Psychological Review, 88(4), 354–364. https://doi.org/10.1037/0033-295X.88.4.354
Blann, L. E. (2005). Early intervention for children and families: With special needs. MCN: The American Journal of Maternal/Child Nursing, 30(4), 263–267. https://doi.org/10.1097/00005721-200507000-00011
CAP Lab. (2024). Cognition, Affect, and Psychophysiology Lab (The CAP Lab). https://support.psyc.vt.edu/labs/caplab
Condry, J. C., & Ross, D. F. (1985). Sex and aggression: The influence of gender label on the perception of aggression in children. Child Development, 56(1), 225–233. https://doi.org/10.2307/1130189
Dickmeis, A., & Roe, K. (2019). Genres matter: Video games as predictors of physical aggression among adolescents. Communications, 44(1), 105–129. https://doi.org/10.1515/commun-2018-2011
Fiedler, K. (2004). Illusory correlation. In R. F. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgment and memory (pp. 97–114). Psychology Press. https://doi.org/10.4324/9780203720615
Gershoff, E. T. (2002). Corporal punishment by parents and associated child behaviors and experiences: A meta-analytic and theoretical review. Psychological Bulletin, 128(4), 539–579. https://doi.org/10.1037/0033-2909.128.4.539
Jenkins, J. S. (2001). The Mozart effect. Journal of the Royal Society of Medicine, 94(4), 170–172. https://doi.org/10.1177/014107680109400404
Kington, R. S., Arnesen, S., Chou, W. S., Curry, S. J., Lazer, D., & Villarruel, A. M. (2021). Identifying credible sources of health information in social media: Principles and attributes. NAM Perspectives, 2021. https://doi.org/10.31478/202107a
Lowry, M., Dean, K., & Manders, K. (2010). The link between sleep quantity and academic performance for the college student. Sentience: The University of Minnesota Undergraduate Journal of Psychology, 3(Spring), 16–19. http://www.psych.umn.edu/sentience/files/SENTIENCE_Vol3.pdf
Neil, A. L., & Christensen, H. (2009). Efficacy and effectiveness of school-based prevention and early intervention programs for anxiety. Clinical Psychology Review, 29(3), 208–215. https://doi.org/10.1016/j.cpr.2009.01.002
Peters-Scheffer, N., Didden, R., Korzilius, H., & Sturmey, P. (2011). A meta-analytic study on the effectiveness of comprehensive ABA-based early intervention programs for children with autism spectrum disorders. Research in Autism Spectrum Disorders, 5(1), 60–69. https://doi.org/10.1016/j.rasd.2010.03.011
Pietschnig, J., Voracek, M., & Formann, A. K. (2010). Mozart effect–Shmozart effect: A meta-analysis. Intelligence, 38(3), 314–323. https://doi.org/10.1016/j.intell.2010.03.001
Rauscher, F. H., Shaw, G. L., & Ky, C. N. (1993). Music and spatial task performance. Nature, 365, 611. https://doi.org/10.1038/365611a0
Rauscher, F. H., Shaw, G. L., & Ky, C. N. (1995). Listening to Mozart enhances spatial-temporal reasoning: Towards a neurophysiological basis. Neuroscience Letters, 185(1), 44–47. https://doi.org/10.1016/0304-3940(94)11221-4
Rollins, L., & Riggins, T. (2017). Cohort-sequential study of conflict inhibition during middle childhood. International Journal of Behavioral Development, 41(6), 663–669. https://doi.org/10.1177/0165025416656413
Rotton, J., & Kelly, I. W. (1985). Much ado about the full moon: A meta-analysis of lunar-lunacy research. Psychological Bulletin, 97(2), 286–306. https://doi.org/10.1037/0033-2909.97.2.286
Sacks, O. (1985). The man who mistook his wife for a hat and other clinical tales. Summit Books.
Sacks, O. (2007, September 24). A neurologist’s notebook: The abyss: Music and amnesia. The New Yorker. http://www.newyorker.com/reporting/2007/09/24/070924fa_fact_sacks?currentPage=all
Society for Research in Child Development. (2021, March 28). Ethical principles and standards for developmental scientists. https://www.srcd.org/about-us/ethical-principles-and-standards-developmental-scientists