Foundations of Information Systems

12.3 Ethics of Artificial Intelligence Development and Machine Learning

Learning Objectives

By the end of this section, you will be able to:

  • Describe the purpose of ethical governance and regulations for developing and using AI and machine learning products
  • Discuss the impact machines using AI have on fairness, bias, transparency, and explainability

Artificial intelligence (AI) is a broad field focused on building systems that exhibit capabilities associated with human intelligence, including collecting information, understanding concepts, applying information, and making decisions. Machine learning is a subset of AI that refers to techniques allowing computers to learn from data. The ongoing development and growth of artificial intelligence and machine learning mean that leaders in the field must be guided by ethical principles and appropriate governance frameworks. Given the potentially significant impacts these technologies can have on society, individuals, and the environment, a comprehensive approach is needed to ensure they are harnessed responsibly. This includes multistakeholder collaboration in which leaders of nations and organizations worldwide work together to address considerations around governance, fairness, bias, transparency, and explainability.

Ethical Governance and Regulations in Artificial Intelligence Systems and Products

The development and use of AI systems must be guided by clear accountability and responsibility frameworks to be ethical. Developers, deployers, and users of AI should be accountable for any adverse impacts resulting from flawed system design, limitations, or misuse, such as phishing or identity theft. Responsibility should be allocated across the AI value chain, from initial data collection and algorithm design to ongoing monitoring and maintenance. Legal regulations and industry standards help clarify where liability lies if harm does occur. For high-risk applications like self-driving cars or AI diagnostics, insurance may be warranted.

Another central ethical concern is protecting privacy and ensuring AI is secure from misuse or cyberattacks. As AI systems collect and analyze expansive datasets, robust data governance practices must safeguard personal information and prevent unauthorized access. Approaches to help mitigate privacy risks include data minimization to limit data collection to information that is relevant and necessary, encryption to render data unreadable without the proper key, and access controls to regulate who can access data. Ongoing security assessments of AI systems (review 5.1 The Importance of Network Security) will identify potential vulnerabilities to be addressed. Any data breaches or system compromises must be reported per breach notification laws.
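To make these safeguards concrete, the following Python sketch pairs data minimization, encryption, and a simple access check. It is a minimal illustration, not a production design: the field names, roles, and record are hypothetical, and it assumes the third-party cryptography package is installed.

```python
# A minimal sketch of three privacy safeguards; field names and roles are
# hypothetical. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# Data minimization: keep only the fields the task actually needs.
raw_record = {"name": "A. Patel", "ssn": "123-45-6789",
              "zip": "64110", "purchase_total": 42.50}
NEEDED_FIELDS = {"zip", "purchase_total"}
minimized = {k: v for k, v in raw_record.items() if k in NEEDED_FIELDS}

# Encryption: transform the stored record so it is unreadable without the key.
key = Fernet.generate_key()          # in practice, managed by a key vault
cipher = Fernet(key)
token = cipher.encrypt(repr(minimized).encode())

# Access control: only authorized roles may decrypt.
AUTHORIZED_ROLES = {"data_steward"}

def read_record(role: str) -> str:
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role {role!r} may not access this data")
    return cipher.decrypt(token).decode()

print(read_record("data_steward"))   # succeeds; other roles raise an error
```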

To achieve these goals, maintaining meaningful human control and oversight over AI is critical. Humans, not fully autonomous systems, must remain ultimately responsible for high-stakes decisions. Artificial intelligence transparency (the ability to show that a system's outputs make sense) and results validation support human oversight. Humans may need to remain “in the loop” and check results when AI systems operate in real time for critical use cases. Predefined constraints can also curb unfettered AI autonomy when human supervision is absent. The goal should be complementing human capabilities with AI rather than replacing human discretion and authority.
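One common way to keep humans “in the loop” is to automate only high-confidence decisions and route the rest to a person. The sketch below assumes a placeholder model and threshold; both are hypothetical, and a real system would calibrate the threshold against the stakes of the decision.

```python
# A minimal human-in-the-loop sketch: predictions below a confidence
# threshold are routed to a person instead of being acted on automatically.
# The model, threshold, and review queue are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.90
human_review_queue = []

def fake_model(case: str) -> tuple[str, float]:
    """Stand-in for a real classifier returning (label, confidence)."""
    return ("approve", 0.72) if "edge case" in case else ("approve", 0.97)

def decide(case: str) -> str:
    label, confidence = fake_model(case)
    if confidence < CONFIDENCE_THRESHOLD:
        human_review_queue.append(case)      # defer to human judgment
        return "escalated to human reviewer"
    return label                             # safe to automate

print(decide("routine application"))         # approve
print(decide("edge case application"))       # escalated to human reviewer
```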

In addition to oversight, AI systems must be transparent regarding their capabilities and limitations. Documentation, logging, and monitoring should provide visibility into system functionality. User interfaces should clearly convey when users are interacting with AI instead of a human being, since this can be difficult to discern. Such transparency ensures appropriate trust in AI systems by aligning user expectations with actual performance. It also facilitates auditing algorithms for issues like bias or inaccuracies. Guidelines and frameworks have been introduced to provide standards for developing and managing autonomous systems; examples include the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the European Commission's Ethics Guidelines for Trustworthy AI.21

Ethics in IS

Ethical Use of Chatbots

Chatbots interact with users in increasingly humanlike ways. This raises ethical concerns, especially if the chatbots are not designed transparently. For example, chatbots may be used to gather individuals’ personal information, possibly violating their privacy. They can be manipulative, persuading users to make unwise decisions or purchases, and they can be biased, which may negatively impact how they interact with humans.

To help ensure that chatbots are used ethically, they should identify themselves up front as AI rather than pretending to be human. They should also provide options to opt out, including the option of dealing with a human instead of a chatbot.
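As a minimal illustration of these two practices, the sketch below shows a chatbot that discloses its AI identity on the first turn and honors an opt-out keyword. The message text and handoff function are hypothetical.

```python
# A minimal sketch of the two chatbot practices described above: up-front
# disclosure that the agent is AI, and an opt-out path to a human.
# All message text and the handoff function are hypothetical.
def handoff_to_human(user_message: str) -> str:
    return "Connecting you with a human agent..."

def chatbot_reply(user_message: str, first_turn: bool = False) -> str:
    if first_turn:
        # Disclose AI identity before anything else.
        return ("Hi! I'm an automated assistant, not a human. "
                "Type 'human' at any time to talk to a person.")
    if user_message.strip().lower() == "human":
        return handoff_to_human(user_message)    # honor the opt-out
    return "Happy to help with that."            # normal automated reply

print(chatbot_reply("", first_turn=True))
print(chatbot_reply("human"))
```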

Another key governance issue is ensuring that AI systems are free from biases. Training data and algorithms must be continually vetted to avoid encoding social biases and prejudices into systems. Diversity among AI development teams also helps reduce bias. Regular algorithm audits and bias testing identify problems that must be addressed.
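Bias testing often starts with simple fairness metrics. The sketch below computes one of them, the demographic parity difference, which compares how often each group receives a favorable outcome; the predictions and group labels are made-up illustrative data.

```python
# A minimal bias-testing sketch: compare a model's positive-outcome rate
# across two groups (demographic parity difference). Data are made up.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = favorable outcome
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = positive_rate("a") - positive_rate("b")
print(f"group a rate: {positive_rate('a'):.2f}")     # 0.75
print(f"group b rate: {positive_rate('b'):.2f}")     # 0.25
print(f"demographic parity difference: {gap:.2f}")   # flag if far from 0
```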

To understand how a lack of AI accountability can cause harm, consider predictive policing algorithms. These algorithms have included biases that disproportionately target minorities. One example is PredPol, a predictive policing software tool used by the Los Angeles Police Department. With inadequate human oversight of the data and methods used by its algorithms, the tool's flawed logic took a while to uncover. Eventually, its built-in feedback loops, in which patrols dispatched to flagged neighborhoods generated the very data that reinforced future predictions, along with its failure to reduce crime, led the department to terminate its use. Related criticism has led PredPol (now Geolitica) and similar policing tools to rebrand and focus less on predicting criminal events and more on improving policing transparency and accountability.22

Alongside algorithmic bias, safety is another ethical imperative for AI and machine learning. Even if unintended, errors or limitations in complex AI systems carry risks of harm. Rigorous testing protocols are essential, especially for physical systems like autonomous vehicles or medical robots. Simulation environments allow for safe evaluation of hazardous scenarios. Fail-safes and human oversight provide additional protection and backup. Organizations that adopt an open, proactive approach toward safety will engender greater public trust.

Sustainability is another emerging area of focus in AI ethics. The exponential growth of AI workloads has significant environmental impacts, from energy consumption to electronic waste. Approaches like energy-efficient model design, low-emission chipsets, and carbon offsetting help mitigate this.23 Artificial intelligence can also be explicitly leveraged for sustainability initiatives, such as mapping deforestation, making waste management more efficient, and predicting both weather events and climate disasters to help communities.24
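The reporting practices cited above start from simple accounting: energy drawn by the hardware, times the hours of use, times the carbon intensity of the local grid. The sketch below runs that back-of-the-envelope calculation; every number in it is an illustrative assumption, not a measurement.

```python
# A back-of-the-envelope sketch of the kind of energy/carbon accounting
# that reporting frameworks formalize. All numbers are illustrative
# assumptions, not measurements.
GPU_POWER_KW = 0.3                 # assumed average draw per accelerator, kW
NUM_GPUS = 8
TRAINING_HOURS = 72
GRID_INTENSITY_KG_PER_KWH = 0.4    # assumed grid carbon intensity

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_HOURS
co2_kg = energy_kwh * GRID_INTENSITY_KG_PER_KWH

print(f"estimated energy: {energy_kwh:.0f} kWh")   # 173 kWh
print(f"estimated CO2:    {co2_kg:.0f} kg")        # 69 kg
```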

Effective governance requires translating ethical principles into action via organizational policies, legal regulations, and industry norms. Governments must develop laws and policies tailored to the ethical use of emerging technologies, balancing innovation with responsible oversight. Companies should enact internal controls aligning AI development and usage with ethics and human values. They must also comply with evolving regulations. Global coordination will become increasingly critical to harmonize governance across jurisdictions.

Finally, civil society plays a crucial role in advocating for ethical AI. Organizations focused on digital rights, consumer protection, and social justice can help give voice to public concern. They can also advise institutions on how to translate idealistic AI principles into concrete daily practices. Ongoing stakeholder dialogue and public engagement will ensure governance keeps pace with technological change.

Realizing the benefits of AI while mitigating risks necessitates holistic governance that integrates ethics throughout the technology life cycle. This requires foresight, responsibility, and coordination between stakeholders. If done comprehensively and with proper intention, AI can flourish in step with the enduring values of privacy, justice, autonomy, and human dignity.

Artificial Intelligence’s Impact on Fairness, Bias, Transparency, and Explainability

As AI systems grow increasingly powerful and ubiquitous, ensuring they align with principles of fairness, accountability, and transparency becomes imperative. Without proactive efforts, AI risks perpetuating harm by amplifying historical prejudices, concealing decision logic, and displacing human oversight.

One major area of concern is that AI systems may discriminate against certain groups of people based on gender, race, age, or other attributes. If the data used to train algorithms contain social biases, such as information that promotes gender or racial stereotypes, AI can further engrain discrimination. Ongoing testing using diverse datasets is essential to uncover hidden biases. A human-in-the-loop system, which involves human contributions and feedback, also allows monitoring outputs for evidence of unfairness. Other best practices that help mitigate prejudice include data anonymization, adversarial debiasing to keep models from learning bias present in training examples, and minority oversampling to ensure balanced classes and sample sizes.25 Promoting diversity among AI development teams further helps uncover issues that need attention. Overall, reducing algorithmic bias is an ethical imperative for organizations deploying AI.
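Of these practices, minority oversampling is the easiest to show in a few lines. The sketch below duplicates minority-class examples (sampling with replacement) until the two classes are the same size; real pipelines typically use dedicated tools such as imbalanced-learn, which this only approximates.

```python
# A minimal sketch of minority oversampling: resample minority-class
# examples with replacement until the classes are balanced.
import random

random.seed(0)
majority = [("sample", 0)] * 90     # 90 majority-class examples
minority = [("sample", 1)] * 10     # 10 minority-class examples

# Draw from the minority class with replacement up to the majority size.
oversampled = [random.choice(minority) for _ in range(len(majority))]
balanced = majority + oversampled

labels = [y for _, y in balanced]
print(labels.count(0), labels.count(1))   # 90 90
```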

The need for transparency in how AI systems operate and make decisions is closely related. “Black box” models like neural networks can render decision logic opaque. However, documentation, logging, monitoring, and auditing capabilities can shed light on system functionality. User interfaces should clearly indicate when users interact with AI rather than humans. Such transparency fosters trust in AI’s actual capabilities. Openly conveying system limitations also reduces the risk of overreliance or misuse. Across all contexts, transparency principles foster ethical use of AI.
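Logging is one transparency mechanism that is straightforward to build in. The sketch below records each AI decision as a structured audit entry with a timestamp, model version, inputs, and confidence; all field names and the model version string are hypothetical.

```python
# A minimal audit-logging sketch: record every AI decision with enough
# metadata to reconstruct and audit it later. Field names and the model
# version are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_decision(model_version: str, inputs: dict, output: str,
                 confidence: float) -> None:
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }))

log_decision("credit-model-v1.3", {"income": 52000, "zip": "64110"},
             "approve", 0.91)
```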

Similarly, explainability—being able to convey the rationale behind AI decisions clearly—is crucial. While certain techniques like linear models or decision trees have self-evident logic, complex neural networks can be inscrutable. To properly question, validate, and enhance AI, developers should incorporate explainability capabilities into the development process wherever feasible. This might involve using localized interpretation methods or approximating models with more easily understood ones. While full explainability may not always be possible, aiming for intelligibility in design still promotes accountability.
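One approximation technique mentioned above is the global surrogate: train an interpretable model to mimic the black box's predictions, then read the surrogate's logic. The sketch below, which assumes scikit-learn and synthetic data, approximates a random forest with a depth-3 decision tree and prints its rules along with a fidelity score (how often the surrogate agrees with the black box).

```python
# A minimal global-surrogate sketch: approximate a black-box model with a
# shallow decision tree whose logic is human-readable. Requires
# scikit-learn; the data and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true
# labels, so the tree explains what the black box does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```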

These concerns create the need for meaningful human oversight over AI systems, particularly of those systems making high-stakes decisions, such as medical diagnoses. As noted previously, there are concerns that AI could become uncontrollable if it is granted unchecked autonomy. As AI develops, human beings must therefore remain ultimately accountable by retaining the ability to audit decisions and override them as warranted. Human-in-the-loop systems are especially important for high-risk real-time applications. In addition, all AI systems should have clearly defined constraints aligned with ethics and legal compliance. Ongoing human evaluation, even if not real-time oversight, is necessary for responsibly developing and deploying AI.

Advancing AI transparency, explainability, and oversight raises technical challenges. Practices such as counterfactual testing and adversarial attacks can uncover limitations and biases of the AI models being used, but these practices require specialized expertise and add complexity. Through extensive testing and validation procedures, emerging disciplines like “Trustworthy AI” and “AI Safety” aim to make such capabilities intrinsic to system design rather than afterthoughts.
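Counterfactual testing can be as simple as changing one sensitive attribute and checking whether the decision flips. In the sketch below, the model is a deliberately biased stand-in, so the test fails and exposes the dependence on gender; the names and scoring rule are hypothetical.

```python
# A minimal counterfactual test: flip a single sensitive attribute and
# check whether the model's decision changes. The scoring rule is a
# hypothetical, deliberately biased stand-in for a trained model.
def fake_model(applicant: dict) -> str:
    # Biased on purpose: the score (wrongly) depends on gender.
    score = applicant["income"] / 1000 + (5 if applicant["gender"] == "m" else 0)
    return "approve" if score >= 55 else "deny"

applicant = {"income": 52000, "gender": "f"}
counterfactual = {**applicant, "gender": "m"}    # change only the attribute

original = fake_model(applicant)
flipped = fake_model(counterfactual)
print(original, flipped)                         # deny approve
if original != flipped:
    print("counterfactual test failed: decision depends on gender")
```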

Getting governance right also involves grappling with gray areas where the appropriate actions are harder to determine. Without adequate safeguards, transparency could open systems to gaming or manipulation by exposing their internals to hackers and others who would misuse AI. Explainability methodologies have technical limitations and assumptions that may yield explanations that are not easily understood. Furthermore, human oversight risks incorrect rejection of valid AI decisions due to cognitive biases. Strategies accounting for such subtleties are critical; oversight should focus on human strengths like values alignment, which uses a shared set of values and goals approved by stakeholders to guide policies and procedures for activities such as AI development. These types of holistic approaches foster accountable innovation.

Meaningful oversight extends beyond internal testing to external regulation and standards. Governments must keep pace with technological change and provide appropriate legal guidance for AI development and use. This may necessitate new data protection, algorithmic accountability, and AI safety regulations. Global coordination to harmonize AI governance across borders is also important. The nonprofit International Association of Privacy Professionals maintains a Global AI Law and Policy Tracker to identify AI governance legislation around the world.26 It also sponsors the annual Global Privacy Summit to bring together leaders in AI governance and privacy. Industry leaders should collectively establish technical and ethical norms that go beyond minimum legal requirements to help create responsible AI systems.

Careers in IS

AI Ethicist

An AI ethicist analyzes technological impacts and advocates for policies that align innovations with human values. AI ethicists are concerned with the various ethical facets of AI development and product implementation, including ethical guidance and standards. They review AI policies and procedures to ensure compliance with ethical requirements. They also identify risks and recommend changes as needed to address advancements in AI.

While still fairly new, AI ethicist positions can be found in any type of organization that uses AI in its operations, including businesses, governments, and nonprofit organizations. AI ethicists work with organizational and community leaders to advocate for responsible, ethical AI development and implementation. Aspiring AI ethicists need interdisciplinary skills in technology, ethics, law, and social sciences, which enable them to gain nuanced perspectives on challenges like algorithmic bias, transparency, and worker displacement. To prepare for these roles, interested students should pursue degrees in computer science, information technology, and related fields with an emphasis on ethics and social sciences.

Footnotes

  • 21“The IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems,” IEEE Standards Association, accessed January 13, 2025, https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/; “Ethics Guidelines for Trustworthy AI,” European Commission, last updated January 31, 2024, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  • 22Johana Bhuiyan, “LAPD Ended Predictive Policing Programs Amid Public Outcry. A New Effort Shares Many of Their Flaws,” The Guardian, November 8, 2021, https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform
  • 23Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau, “Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning,” Journal of Machine Learning Research 21, no. 248 (2020): 1–43, https://www.jmlr.org/papers/volume21/20-312/20-312.pdf
  • 24Victoria Masterson, “9 Ways AI Is Helping Tackle Climate Change,” World Economic Forum, February 12, 2024, https://www.weforum.org/stories/2024/02/ai-combat-climate-change/
  • 25Anoop Krishnan and Ajita Rattani, “A Novel Approach for Bias Mitigation of Gender Classification Algorithms Using Consistency Regularization,” Image and Vision Computing, 137 (September 2023): 104793, https://doi.org/10.1016/j.imavis.2023.104793
  • 26“Global AI Law and Policy Tracker,” IAPP, last updated November 2024, https://iapp.org/resources/article/global-ai-legislation-tracker/