AI Ethics and the Role of IT Auditors

Jai Sisodia
Date Published: 6 December 2022

Humans have always lived by certain ethical norms that are driven by their communities. These norms are enforced by rules and regulations, societal influence and public interactions. But is the same true for artificial intelligence (AI)?

The objective of AI is to increase the efficiency and effectiveness of daily tasks. However, AI cannot infer or understand information as humans do. Instead, it uses significant amounts of computing power to derive insights from volumes of data far beyond human cognitive capacity.

With this in mind, consider what would happen if large quantities of biased data were fed to an AI system. That AI system could easily become a tool that exacerbates ethical issues, such as social inequalities, at scale. Therefore, it is critical that ethics be given careful consideration when designing, operating and auditing AI systems.

Examining AI Ethics

According to IBM’s AI Ethics in Action report, AI ethics is “[G]enerally recognized as a multi-disciplinary field of study that aims to optimize the beneficial impact of AI by prioritizing human agency and well-being while reducing the risks of adverse outcomes to all stakeholders.”1

AI can be a powerful tool for reducing the impact of bias, but it also has the potential to worsen the issue by deploying biases at scale and in sensitive areas.

The US National Institute of Standards and Technology (NIST) Special Publication (SP) 1270 identifies 3 categories of bias to which AI applications are prone:2

  1. Systemic bias—Results from the policies and procedures of organizations that operate in a manner that favors certain social groups over others. When AI models are fed data biased along characteristics such as gender or ethnicity, they are unable to fulfill their intended purpose effectively.
  2. Statistical and computational bias—Occurs when the sample is not representative of the population. These biases are generally found in data sets or the algorithms used for the development of AI applications.
  3. Human bias—Results from the systematic errors in human thinking. This bias is often dependent on human nature and tends to differ based on the individual’s or group’s perception of the information received. Examples of human bias include behavioral bias and interpretation bias.3
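The second category, statistical and computational bias, lends itself to a simple automated check. The following is a minimal sketch, not a method prescribed by NIST SP 1270: it compares each group's share of a training sample against its known share of the population and flags groups whose gap exceeds a tolerance. The group names, sample and 10 percent tolerance are illustrative assumptions.

```python
# Minimal representativeness check for statistical/computational bias.
# Group labels, sample data and the tolerance are illustrative assumptions.

from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return each group's absolute gap between its share of the
    training sample and its known share of the population."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: abs(counts.get(group, 0) / total - expected)
        for group, expected in population_shares.items()
    }

# Hypothetical training sample drawn 80/20 from a population that is 50/50.
sample = ["group_a"] * 80 + ["group_b"] * 20
gaps = representation_gaps(sample, {"group_a": 0.5, "group_b": 0.5})

# Flag any group more than 10 percentage points off its population share.
flagged = {group: gap for group, gap in gaps.items() if gap > 0.1}
print(flagged)  # both groups are roughly 30 points off their population share
```

A check like this belongs in the pre-design stage described below, where the data sets intended for AI training are first identified.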

There are multiple methods by which these biases can be managed or mitigated. NIST SP 1270 recommends utilizing the 4 stages of the proposed AI life cycle model, which enable various stakeholders (e.g., developers, designers) to effectively identify and address AI biases.

The 4 stages include:

  1. Pre-design—The stage in which the planning, problem identification, research and identification of data sets are undertaken. This stage can help identify biases such as limited opinions, organizational and individual heuristics, or bias in the data lake selected for AI training. Controls such as well-defined policies and governance mechanisms should be taken into consideration during this stage.
  2. Design and development—This stage involves analyzing the requirements and data sets, based on which the AI model is designed or selected. It is where most development activities happen and is, therefore, prone to multiple AI-related biases. Hence, the following controls are recommended:
    • Perform a compatibility analysis to identify potential sources of bias and plans to address them.
    • Implement a periodic bias assessment process in the development stage such as assessing AI algorithms or system input/output.
    • At the end of development, perform an exhaustive bias assessment to ensure that the application stays within the defined limits.
  3. Deployment—The stage in which the AI application is implemented in the production environment and starts processing live data. Adequate controls should be in place to ensure the proper functioning of the system (e.g., continuous monitoring of system behavior, robust policies and procedures).
  4. Testing and evaluation—This stage requires continuous testing and evaluation of all AI systems and components throughout all stages of the life cycle.
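The periodic bias assessment of system input/output recommended during design and development can be made concrete with an output-level fairness metric. The following is one possible sketch using the disparate impact ratio (the "four-fifths rule"); the groups, decisions and the 0.8 threshold are illustrative assumptions, not controls prescribed by NIST SP 1270.

```python
# Minimal output-level bias assessment using the disparate impact ratio.
# Group names, decision data and the 0.8 threshold are illustrative assumptions.

def disparate_impact(outcomes, protected_group, reference_group):
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    `outcomes` maps group name -> list of booleans (True = favorable)."""
    def rate(group):
        return sum(outcomes[group]) / len(outcomes[group])
    return rate(protected_group) / rate(reference_group)

# Hypothetical model decisions collected during continuous monitoring.
decisions = {
    "group_a": [True] * 60 + [False] * 40,  # 60% favorable outcomes
    "group_b": [True] * 30 + [False] * 70,  # 30% favorable outcomes
}

ratio = disparate_impact(decisions, "group_b", "group_a")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

Run periodically against production outputs, a metric like this supports both the design-stage bias assessments and the continuous monitoring recommended at deployment.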

IT Auditors Can Mitigate AI Ethical Risk

IT auditors in today’s world are ill-equipped to handle the complexities of AI. There are no universally agreed-upon standards for auditing AI systems, which especially hinders the production and use of ethical AI. It is, therefore, up to IT auditors to perform ethics-based AI auditing, which entails analyzing the basis for AI, the code of AI systems and the effects that AI brings forth.4

So, what can IT auditors do to incorporate ethics-based AI auditing into their methods? There are several suggestions, including:

  • Analyze existing frameworks, regulations, processes, controls, and documentation that address various areas such as risk, compliance, privacy, information security, and governance.
  • Explain AI to stakeholders and communicate with them proactively to propose enhancements that address new and emerging ethical AI risk factors. Examples of enhancements are new committees, charters, processes or tools.
  • Develop a comprehensive AI risk assessment program that includes relevant procedures, processes, documentation, roles, responsibilities and protocols. This can be done in partnership with skilled and trained practitioners.
  • Seek out information about AI design and architecture through self-learning and training, and by involving subject matter experts (SMEs) to determine the proper scope of impact.
  • Track global AI and data ethics developments such as legal, regulatory and policy changes to ensure that they are integrated with regular change management routines in the organization.

In addition, IT auditors should develop an understanding of ethics-based concepts and principles rather than merely considering AI from a technological standpoint.

For example, there are several questions auditors should consider:

  • Have the following principles been adequately assessed throughout the development and use of AI?:
    • Fairness
    • Transparency
    • Accountability
    • Explainability
    • Interpretability
    • Inclusivity
  • Has adequate consideration been given to the privacy and human rights of the data subjects?
  • Have the potential unintended uses and misuses of AI applications been adequately assessed?

One thing is certain: Enterprises cannot adopt AI without first addressing its ethical issues. IT auditors can play a significant role in the process by enhancing their skills and expanding their focus areas to not only consider technological risk, but also the ethical risk posed by AI systems. If unaddressed, ethical risk can have harmful societal outcomes such as causing disparate impact and discriminatory or unjust conditions.

Endnotes

1 Rossi, F.; B. Rudden; B. Goehring; AI Ethics in Action, IBM Institute for Business Value, 31 March 2022
2 National Institute of Standards and Technology, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST Special Publication 1270, USA, March 2022
3 Huppert, J. D.; “Cognitive Theory,” Comprehensive Clinical Psychology, vol. 6, 2022
4 Eliot, L.; “Auditing of AI Is Tricky, Especially When It Comes to Assessing AI Ethics Compliance, and Vexing Too for Auditing Those Autonomous Self-Driving Cars,” Forbes, 12 May 2022

Editor’s Note

Hear more about what the author has to say on this topic by listening to the “AI Ethics and the Role of IT Auditors” episode of the ISACA® Podcast.

Jai Sisodia, CISA, AWS Certified Cloud Practitioner, CISSP, ITIL v3

Is an IT, cybersecurity, and privacy auditor for a global engineering, procurement, construction, and installation (EPCI) organization. He is responsible for leading and executing global audit and advisory engagements across several areas including enterprise resource planning (ERP) systems, cybersecurity, global data centers, cloud platforms, industrial control systems, IT networks, data privacy and third-party risk. He previously worked as an advisory consultant for a leading Big 4 consulting enterprise and a multinational healthcare organization. Sisodia is an ISACA® Journal article reviewer and actively contributes to the ISACA Journal and ISACA Now blog.