By the end of this section, you will be able to:
- Describe the role of codes of ethics within business and technology.
- Assess how much responsibility corporations should take for social, economic, and environmental problems.
- Evaluate the difficulty of establishing ethical practices pertaining to emerging technologies.
Ethical questions pertaining to business and to emerging technology raise a number of broad issues, including corporate responsibility and the potential dangers of artificial intelligence. Additionally, a great deal of work in these subfields supports the development and implementation of codes of ethics used by organizations to guide the conduct of their members. This section explores both these broader issues and the practical concerns.
Codes of Ethics
A business is defined as an organization that engages in selling goods and services with the intent to make a profit. Governments generally restrict the activities of businesses through laws and regulations. To ensure that their members act in accordance with these laws and regulations and to meet additional goals that reflect the values of the societies in which they operate, businesses often create a code of ethics. These codes outline what actions are and are not permissible for an organization and for its individual employees. They address concrete matters, such as bribery, discrimination, and whistleblowing, while also laying out guidelines for how to accomplish environmental and social goals and how to build and maintain trust and goodwill.
Businesses are not the only entities, however, that issue such codes of ethics. Professional organizations serving specific groups, such as nurses and teachers, also issue codes, which individuals must study and commit to abide by in order to qualify for membership. Within the fields of science and technology, for example, the Institute of Electrical and Electronics Engineers Computer Society (IEEE-CS) provides a wealth of resources for computer science and engineering professionals, including education, certification, research, and career and solutions centers. In 2000, the IEEE-CS adopted the Software Engineering Code of Ethics and Professional Practice, which defines the ethical obligations of software engineers. These obligations include a commitment to approve software only if it meets certain specifications and passes appropriate tests, is deemed safe, and does not threaten to diminish the quality of human life, impinge on privacy, or harm the environment (IEEE-CS/ACM Joint Task Force 2001). Determining what would constitute outcomes such as diminishing the quality of life or impinging on privacy ties these concrete codes of ethics to larger questions involving normative moral theories and political debate.
Businesses range from small family-owned organizations to large corporations. Governments often allow businesses to classify themselves as one or more legal entities, each of which must fulfill specific legal requirements. Corporations are considered single entities distinct from the individuals who compose them. Early in the modern era in the West, a business was understood to be a collection of individuals who could be held responsible if something went wrong. Historians of business trace the birth of the modern corporation to the Dutch East India Company, founded in 1602. Because modern corporations are legal entities separate from the individuals who work for them, individuals can engage in business practices without necessarily bearing the legal consequences of the business’s actions. Instead, the business entities are held accountable and usually punished with financial penalties.
The status of corporations is a hotly debated topic in the United States, with many arguing that the rights of corporations have expanded in inappropriate ways in recent decades. For example, the Supreme Court of the United States ruled in Citizens United v. FEC (2010) that corporations may spend money to influence elections and in Burwell v. Hobby Lobby (2014) that some for-profit corporations may refuse on religious grounds to cover birth control in their employee health plans (Totenberg 2014). Some argue that these legal rights challenge or threaten other ethical expectations acknowledged in contemporary US society. We can reasonably ask whether the legal rights of corporations also imply that these entities have moral responsibilities. Moreover, to whom are corporations morally responsible: shareholders, employees, customers, or the community?
Interests of Shareholders and Stakeholders
In 1970, Milton Friedman published a now-famous essay in the New York Times in which he argues that businesses have one moral responsibility: to increase profits (Friedman 1970). On Friedman’s view, anyone who makes decisions on behalf of a firm is obligated to take whatever actions will maximize the business’s profits and thus the returns to its shareholders. From Friedman’s perspective, it is the responsibility of government to impose regulations that rein in businesses, which should be motivated only by a desire to benefit themselves, so that they do not act in ways that harm society.
A company, Friedman argued, is owned by its shareholders, who have a right to the maximum possible return on their investment. Shareholders, also referred to as stockholders, are individuals who own a share of a corporation. Shareholders invest capital and receive a positive return on their investment when a company is profitable. Friedman’s position favors the interests of shareholders. Stakeholders, in contrast, are all those who have a stake in a business’s operations, including employees, customers, shareholders, and the surrounding community. So while the term shareholders refers to a relatively narrow group of individuals who have invested capital and own a portion of a given corporation, the term stakeholders refers to a much wider group that includes people who have not invested money but who are affected by the business’s operations.
Some argue for the view of shareholder primacy—that a firm’s managers ought to act solely for the interests of shareholders—based on deontological grounds. Such positions appeal to the concept of duty to justify an obligation to promote the interests of shareholders. In this view, shareholders invest capital and own (a portion of) a company, and executives are tasked with running the firm in the shareholders’ best interests. In contrast to shareholder primacy, stakeholder theory argues that “managers should seek to ‘balance’ the interests of all stakeholders, where a stakeholder is anyone who has a ‘stake,’ or interest (including a financial interest), in the firm” (Moriarty 2021). While shareholder theory asserts that the principal obligation is to increase the wealth of shareholders, stakeholder theory differs insofar as it advocates using corporate revenue in the interests of all stakeholders.
Safety and Liability
Today, corporations in the United States are held to standards of workplace safety established by the Occupational Safety and Health Administration (OSHA), created in 1971. Such government regulation of corporations is relatively new. After the Industrial Revolution began in the mid-18th century, manufacturing introduced new work models based on production efficiency, some of which created hazards for workers. Early classical economists like Adam Smith (1723–1790) advocated a laissez-faire, or “hands off,” approach to business, in which government interfered minimally in the activities of companies or manufacturing firms (Smith 2009). Once the Industrial Revolution was well established, workers in factories were expected to labor for long hours with few breaks, in very dangerous conditions. They received little pay, and children were commonly part of the workforce. While philosophers like Karl Marx and Friedrich Engels called for revolutionary change—replacing the capitalist economic system with a communist one—others called for political reforms (Marx and Engels 2002). Little by little, laws were passed to protect workers, beginning with the 1833 Factory Act in the United Kingdom (UK Parliament n.d.).
More recent legislation affords employees the right to lodge confidential complaints against their employer. Complaints may point to hazards in the workplace, work-related illnesses, or anything else that endangers employee health and safety. If concerns are verified, the company must correct these violations or face fines from the government. Cutting costs in manufacturing processes, while it theoretically should increase shareholder profits, can be dangerous to both employees and the public and can ultimately harm a company’s long-term profits. Consider, for example, the Firestone/Ford tire controversy at the turn of the 21st century. An investigation into unusually high failure rates of Firestone tires mounted on Ford vehicles, failures that resulted in thousands of accidents and 271 fatalities worldwide, brought forth multiple lawsuits and a congressional investigation in the United States. Millions of tires were recalled, costing Firestone and Ford billions of dollars, and a number of executives at both companies resigned or were fired (Jones 2000).
Modern multinational corporations are entities that operate throughout the world, the largest employing over a million people. The relationship between corporations and their employees is an important area of focus in business ethics. Analyzing the moral obligations that corporations have toward their employees is more important than ever as large firms continue to gain power and control within the market.
Most people spend a significant part of their lives at work. The Scottish moral philosopher Adam Smith famously expressed concern about the trend he observed toward increased specialization of work in order to improve efficiency and increase production. While good for production and profits, Smith observed, specialization made work repetitive, mindless, and mechanical (Smith 2009). Smith worried that such work was harmful because it was not meaningful: it required no skill, offered workers no opportunities to make choices, and was highly repetitive and uninteresting. Yet while Smith expressed concern about the lack of meaningful work, he did not believe businesses have an obligation to provide it.
Unlike Smith, later philosophers such as Norman Bowie have argued “that one of the moral obligations of the firm is to provide meaningful work for employees” (Bowie 1998, 1083). Applying a Kantian perspective, Bowie develops a robust concept of meaningful work based on the belief that people must always be treated as ends in themselves. To treat people as ends means respecting them as rational agents capable of freely directing their own lives. Bowie argues that to treat a person as anything other than an end is to strip them of their moral status. He characterizes meaningful work as work that (1) a worker freely chooses, (2) pays enough for a worker to satisfy their basic needs, (3) provides workers opportunities to exercise their autonomy and independence, (4) fosters rational development, (5) supports moral development, and (6) does not interfere with a worker’s pursuit of happiness. As Bowie sees it, meaningful work recognizes the important role work plays in a person’s development: it is through work that we develop our ability to act autonomously and live independently (Bowie 1998). When workers earn a living wage, they acquire the means to be independent, live their own lives, and pursue their idea of a happy life. When workers are not paid a living wage, Bowie argues, they are not treated as human beings deserving of respect. We see this, for instance, in the United States, where some workers employed full time by large corporations earn so little that they qualify for government assistance programs. In such cases, Bowie believes, workers cannot be truly independent because they do not earn enough to cover their basic needs.
Fair Treatment of Workers in an Age of Globalization
In some countries, labor laws are minimal or nonexistent, and workers may face the same dangers that factory workers in the West faced in the 19th century. Often such operations supply goods for US companies and a Western market. During the late 20th century, many US corporations relocated their manufacturing overseas in order to save money. These savings were passed on to consumers as cheaper goods but also resulted in large-scale job loss for American workers and the economic decline of many US cities and towns (Correnti 2013). Companies that outsource labor have also been accused of exploiting workers in other countries, where government regulation and protection may not even exist. On the one hand, if there is no law to violate, some may argue that corporations are doing nothing wrong. Moreover, people working in these factories may be paid more than they could earn any other way. Nonetheless, most would acknowledge that there must be some standard of morality and fair employment practices, even when the government does not provide one. Regardless of where labor is procured, it raises dilemmas about balancing the just treatment of workers with company profits.
Equity through Affirmative Action
Affirmative action refers to taking positive steps “to increase the representation of women and minorities in areas of employment, education, and culture from which they have been historically excluded” (Fullinwider 2018). The goal of increasing representation of underrepresented and historically excluded groups is understood to be desirable not simply to increase diversity but also to provide examples that affirm possibilities for those in underrepresented and marginalized groups. Affirmative action has never mandated “quotas” but instead has used training programs, outreach efforts, and other positive steps to make the workplace more diverse. The goal has been to encourage companies to actively recruit underrepresented groups. In application processes (e.g., for employment or college admissions), affirmative action sometimes entails giving preference to certain individuals based on race, ethnicity, or gender. Such preferential selection has been the driver of much of the controversy surrounding the morality of affirmative action.
Critics of affirmative action argue that it encourages universities to admit, or companies to hire, applicants for reasons other than merit. If preference is given to individuals based on race, ethnicity, or gender, then admissions and employment become not about what a person has done and shown they can do but about factors unrelated to performance. The concern is that less qualified individuals are unfairly preferred over more qualified ones simply to achieve greater diversity and representation. This raises an important question about the purpose of the application process. Is the goal of having individuals compete through an application process to ensure that a university or business selects only the best candidates, or is it to promote social goals like the representation of underrepresented groups?
Some argue that employers who hire or promote based on qualifications, regardless of race or gender, are doing the right thing and that specifically seeking members of a particular race or gender for a position undermines the institution’s own success and competitiveness, since an institution’s ability to compete and succeed depends on the quality of its workforce. On this view, rather than focusing on the hiring or application process, we should focus on ensuring that individuals from underrepresented groups can be competitive on their own merit. Another potential problem with preferential selection is that individuals from groups that have historically been excluded may be viewed as less qualified even when they were admitted or hired solely on the basis of their own merit and achievements. In other words, affirmative action may inadvertently make it harder for qualified and competitive individuals from underrepresented groups to be taken seriously or to fulfill their responsibilities.
Contemporary American philosophers have defended affirmative action practices on various grounds. James Rachels (1941–2004) argued that giving preference based on race is justifiable because White people have enjoyed privileges that have generally made it easier for them to achieve. While so-called reverse discrimination may harm some White people, Rachels thought that by and large it was a positive practice that helped groups who have historically faced discrimination. Judith Jarvis Thomson (1929–2020) similarly “endorsed job preferences for women and African-Americans as a form of redress for their past exclusion from the academy and the workplace” (Fullinwider 2018). Mary Anne Warren (1945–2010) likewise argued in favor of preferences as a way to make the admission and hiring process fair. As Warren saw it, “in a context of entrenched gender discrimination,” such preferences could very well “improve the ‘overall fairness’” of the process (Fullinwider 2018).
Ethics and Emerging Technologies
Almost everyone in the contemporary world uses technologies such as cell phones and computers, but few of us understand how these devices work. This ignorance hampers our ability to make informed decisions as a society about how to use technology fairly or judiciously. A further challenge is that technology evolves much faster than society’s ability to respond to it.
Artificial intelligence (AI), originally a feature of science fiction, is in widespread use today. Current examples of AI include self-driving cars and recommendation algorithms. Philosophers and engineers sort AI into two categories: strong and weak. Strong artificial intelligence refers to machines that can perform the full range of human cognitive tasks, but at a very rapid pace (machine speed). Weak artificial intelligence refers to artificial intelligence that performs primarily one task, such as Apple’s Siri or social media bots. Philosophers of mind such as John Searle (b. 1932) argue that truly strong artificial intelligence does not exist, since even the most sophisticated technology does not possess intentionality the way a human being does. On this view, no computer could have anything like a mind or consciousness.
Despite Searle’s assessment, many people—including leaders within the field of computer science—take the threat of AI seriously. In a Pew Research Center survey, industry leaders expressed common concerns over exposure of individuals to cybercrime and cyberwarfare; infringement on individual privacy; the misuse of massive amounts of data for profit or other unscrupulous aims; the diminishing of the technical, cognitive, and social skills that humans require to survive; and job loss (Anderson and Rainie 2018). These concerns may reflect a deeper problem—what Swedish philosopher Nick Bostrom (b. 1973) calls a mismatch between “our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world.” Although leaders express more immediate concerns reflected in the Pew report, Bostrom’s fundamental worry—like those expressed in science fiction literature—is the emergence of a superintelligent machine that does not align with human values and safety (Bostrom 2014).