By the end of this section, you will be able to:
- Define applied epistemology.
- Describe the social aspect of knowledge and justification.
- Describe standpoint epistemology.
- Identify examples of epistemic injustice.
Applied epistemology, like other areas of applied philosophy, takes the tools of philosophy and applies them to areas of practical concern. Specifically, it applies the methods and theories particular to epistemology to current social issues and practices. Applied epistemology often approaches epistemological questions on a collective or systems level. When looking at systems, it investigates whether systems of investigation (like those in the sciences) are structured in the best way to lead to true beliefs. When looking at collectives, it examines whether and how groups of people deliberate in ways that reliably produce true and justified beliefs. These groups can range from small bodies, such as a jury, to large collectives, such as a democracy.
The traditional epistemology that most of this chapter has covered is focused on individuals: what a single person can know, or when a single subject is justified. For the most part, gaining knowledge is treated as an individual effort. Social epistemology instead investigates how groups pursue knowledge and justification and how an individual can best seek justification and knowledge in a social world. Social epistemology takes seriously the fact that humans are, by and large, social animals who rely on others for much of what they come to know. This dependence on others eases knowledge acquisition, but it also complicates the task, given concerns about the reliability of others.
How much of your knowledge was gained through investigation you conducted entirely on your own? Very little, most likely. We rely on other humans, past and present, for a very large proportion of our knowledge. Scientific endeavors consist of amending and adding to the work of others over the course of centuries. The propositional knowledge learned in school is gained through layers upon layers of individuals trusting the testimony of others—students trusting the testimony of teachers, teachers trusting the testimony of books, the writers of the books trusting the testimony of sources, and so on. The news we view, the books we read, the conversations we overhear—all of these are social means of gaining knowledge.
Knowledge gained through such social means is knowledge gained through testimony. Any time you believe something because you read it or heard it somewhere, you believe it based on testimony. Of course, people are not always reliable: they sometimes use poor reasoning, misremember, or even lie. Hence, testimony is also sometimes unreliable. This raises the question: When is belief based on testimony justified?
Testimony is clearly of central importance to social epistemology. In determining whether to believe what others tell us, we ask whether they are trustworthy. A trustworthy source of testimony is honest, unbiased, rational, well-informed, and clearheaded. We also look to experts and authorities: people whose experience, education, and knowledge in an area make them more reliable than others. Questions surrounding testimony are questions about justification. When are we justified in believing others? Whom are we justified in believing in particular situations? When and how does testimony give us justification for a belief? And what should we do when the testimony of others conflicts with beliefs we already hold?
When the testimony of another person contradicts your own belief, what should you do? In cases where the other person is an expert and you are not, the testimony ought to weaken your confidence in your belief: you should either change your belief or withhold judgment until you can get further justification. But what should you do when the person is not an expert but an epistemic peer? An epistemic peer is a person who is in an equal epistemic position relative to some domain: they have the same cognitive ability, evidence, and background knowledge in that domain. A person can be an epistemic peer with respect to one domain but not another. You may know that you are on level epistemic ground with your best friend on the subject of baseball but that they are an authority compared to you on the subject of baking.
Social epistemologists theorize about how peer disagreement ought to function in justification and belief. Some theorists argue that you should always modify your conviction in some way in the face of peer disagreement, though they disagree about exactly how you ought to modify your view. Others maintain that peer disagreement does not always give you reason to think you are mistaken (Frances and Matheson 2018).
When assessing the testimony of a person you believe is an epistemic peer, ask yourself the following questions:
- Does the person supplying the testimony have a history of lying?
- Is this person known to have biases that might distort their perceptions?
- Does this person have a good track record?
- Does this person’s testimony conflict with testimony from others?
- What are this person’s motives?
When assessing the testimony of a purported authority on some subject, ask yourself the following questions:
- Is this a question on which there is expertise?
- Is the person supplying the testimony an expert in the relevant field?
- Is there a consensus among experts in the relevant field on the question at hand?
- Does this person’s testimony reflect agreement with the consensus of experts?
- Is there reason to think this person is biased?
So far, we have looked at how social factors influence an individual’s justification and beliefs. Social epistemology also investigates whether it is possible for groups to have beliefs. We often attribute beliefs to groups of people. We say things like “The United States believes in freedom,” “The Supreme Court holds that a right to privacy exists,” “Scientists believe in climate change,” and “The jury knew he was guilty.” When can it rightfully be said that a group believes something? One answer is that a group believes P only in cases in which all or almost all members of the group believe P. However, we do attribute beliefs to groups while not always assuming that every member holds the belief. The Supreme Court example above illustrates that not every member of a group must believe something for us to say that the group does. When the court decides an issue with a 6–3 vote, we still attribute belief to the court as a whole.
Another view is the commitment view. On this view, group belief does not require that all members believe; rather, members of the group are jointly committed to a belief as a body by virtue of being members of that group (Goldman and O’Connor 2019). Group commitment to a belief creates a normative constraint on members of the group to act in accordance with that belief. Commitment views fit groups formed around allegiance to specific ideas. Take religious groups, for example, which coalesce around beliefs pertaining to God and religious dogma.
If groups are capable of holding beliefs, then the question of how group beliefs are justified is clearly relevant. Note that some of the theories of epistemic justification discussed earlier apply to groups as well. Goldman’s reliabilism focused on reliable belief-forming processes, and social epistemology likewise examines the reliability of the processes used in juries, democracies, and the sciences.
Social epistemology accounts for the social nature of knowledge and justification. The quality and extent of an individual’s knowledge depend heavily on the people that individual deems trustworthy. The same is true of group or public knowledge (knowledge generally accepted as true by a collective). Individuals and perspectives granted expert status have more influence on what is accepted, which means that many other individuals and perspectives will be ignored. Furthermore, it is often whole types or groups of people who are excluded, which becomes problematic if the perspectives of those groups are valuable to the task of knowledge creation. Standpoint epistemology takes this worry seriously. Standpoint epistemology studies the relationship between an individual’s social status and that individual’s epistemic position. Of particular importance to the theory is the notion that the relative power of individuals and groups influences whom we consider to be reliable sources, causing us to ignore the perspectives of less powerful groups. Furthermore, standpoint theory argues that the exclusion of entire groups harms the whole enterprise of gaining knowledge.
Take as an example the president of a large factory who wants to increase efficiency and cut down on waste. The president convenes the department heads and managers to identify areas of inefficiency and waste, essentially seeking the perspectives of those with more power within the factory. But if the president does not also elicit the opinions of workers in the warehouse or on the factory floor, the president misses out on potentially valuable perspectives. A manager may think they can adequately identify problems in the way manual work is done. But a factory worker, situated day after day on the factory floor, has a unique perspective. Standpoint theorists hold that perspectives such as the floor worker’s are uniquely valuable and cannot be replicated by those not in that position.
Standpoint epistemology is applied to many areas of study. In the social sciences, where the goal is to describe social structures, behaviors, and relationships, standpoint theorists advocate for focusing on the perspectives of traditionally marginalized groups. If the general goal is to study how people do things, then it does not do any good to ignore the experiences of entire classes of people. And when the goal is to discover facts about power dynamics within social institutions, focusing only on privileged perspectives is woefully inadequate. If anthropologists in the 1950s wanted to understand racism and the unequal power structure in the American South, interviewing Black citizens would generate more insightful evidence than interviews with White citizens. Black Americans were in a better epistemic position than their White counterparts to describe the power structure. Similarly, women are in a better position than their male counterparts to explain sexism within a workplace. People who use wheelchairs are in a much better position than those who do not to design a truly accessible bathroom. Examples such as these abound.
Standpoint epistemology also critiques the traditional hard sciences and medical research. Hard sciences, such as biology, chemistry, and physics, are those that rely on controlled experiments, quantifiable data, and mathematical modeling, and they are generally noted for being exact, rigorous, and objective. Standpoint theorists question this objectivity and reveal how the biases and perspectives of researchers can influence these supposedly objective fields. A clear example is early research on heart disease. Because medical researchers, who were mostly male, focused their studies on men, heart disease was considered a men’s disease. The symptoms of a heart attack that doctors and patients were warned to look out for did not include many symptoms that women experience when having a heart attack (Kourany 2009). Men most often experience chest pain, while women are more likely to experience symptoms such as jaw pain and nausea (American Heart Association n.d.). As a result, many women did not seek medical attention when experiencing heart problems, and doctors failed to properly diagnose them when they did. Standpoint theory reveals not only that varied standpoints are valuable but also that specific standpoints often carry implicit or explicit bias: failing to include women or people of color in data sets, including only particular variables in modeling, and so on.
If standpoint epistemology is correct in concluding that valuable perspectives are often excluded from social and scientific discourse, then this is an instance of epistemic injustice. Epistemic injustice is injustice done to someone specifically in their capacity as a knower. Epistemic injustices include the exclusion and silencing of perspectives, systematic misrepresentation of group or individual views, unfair conferring of expert status, and unjustified distrust of certain perspectives. British philosopher Miranda Fricker (b. 1966), who coined the term epistemic injustice, divides epistemic injustice into two categories: testimonial injustice and hermeneutical injustice (Fricker 2007). Testimonial injustice occurs when the opinions of individuals or groups are unfairly ignored or treated as untrustworthy. Hermeneutical injustice occurs when a society’s language and concepts cannot adequately capture the experience of people living within that society, thereby limiting understanding of their experiences.
Silencing and distrust of someone’s word often occur by virtue of that individual’s membership in a marginalized group. Women, people of color, people with disabilities, low-income individuals, and religious minorities are all examples of marginalized groups. Take as an example a criminal trial. If the jury takes the testimony of a witness less seriously because of the witness’s perceived class status or membership in a particular group, this is an example of epistemic injustice, specifically testimonial injustice. Philosophers who focus on testimonial injustice draw on empirical research to show how the voices of some individuals and groups are unfairly ignored and discounted compared to others. For example, many studies over the past few decades have illustrated that reports of pain by Black patients are taken less seriously by medical professionals than similar pain reports by White patients. One outcome is that Black patients are given less pain medicine and pain management than White patients, even in cases where the patients had the same injury or surgery (Smedley, Stith, and Nelson 2003; Cintron and Morrison 2006). This is clearly a case of testimonial injustice: Black patients receive less care because their testimony (reporting pain) is not taken as seriously as the testimony of their White counterparts.
But testimonial injustice also occurs when someone’s opinions are systematically misrepresented. To misrepresent a view is to interpret that view in a way that does not align with the original intended meaning. As an example, consider the Black Lives Matter movement and a popular response to it. Black Lives Matter was formed in response to police brutality and racially motivated violence against Black people. The idea was to affirm the value of Black lives. However, a popular response to the movement was the phrase “All lives matter.” This response implies that the message of Black Lives Matter is really that only Black lives matter, which is an unfair and inaccurate representation of the view.
Hermeneutical injustice occurs when language and concepts cannot adequately capture an individual’s experience, leaving both the individual and those around them unable to understand that experience. The classic example of hermeneutical injustice focuses on sexual harassment. Before the phrase sexual harassment was coined and the concept understood by society, women had a difficult time describing certain experiences in the workplace. Women experienced unwanted attention, exclusion, comments concerning their bodies and looks, and different treatment based on negative assumptions about their gender. Many women were fired for not going along with such treatment. But because there was no word for the experience, many women could not understand or explain their own discomfort, and accounts of their distressing experiences ran the risk of not being taken seriously by others. The phrase sexual harassment filled a gap in the concepts available to explain and describe experience. Perhaps you have had the experience of being introduced to a word or concept that suddenly illuminated part of your experience, greatly increasing your understanding of yourself and your ability to explain yourself to others.