Algorithm auditing is a critical component of ensuring the safe and ethical development and deployment of artificial intelligence (AI) systems. As AI becomes more prevalent and integrated into various aspects of our lives, as seen in the rapid adoption of generative AI and large language models such as ChatGPT, it is important to ensure that these systems are transparent, fair, and accountable.
Algorithm auditing involves examining algorithmic systems and their governance to ensure that ethical, safety, and compliance risks are sufficiently managed. By conducting independent audits, algorithm auditors can provide assurance to stakeholders that these systems are operating as intended, promote transparency and accountability, and further BABL’s mission of promoting human flourishing in the age of AI.
To fulfill this mission, algorithm auditors need a number of skills and capabilities, many of which fall outside the standard audit/assurance skillset. Current financial audit and assurance standards are critically important for maintaining independence and professional conduct; however, further capabilities are needed to deal with the complex sociotechnical systems in which modern AI and machine learning algorithms are embedded. This is due to a) the rapid advance of the underlying technology, b) our rapidly evolving understanding of the ways these systems can affect society (both negatively and positively), and c) the fact that sufficient knowledge of (a) and (b) is needed to mitigate risk when evaluating algorithmic systems.
What this training is:
The objective of this training is to equip auditors with sufficient knowledge to identify relevant risks of using algorithmic systems, best practices for the governance of such systems, and the common techniques and workflows that are involved in modern AI/ML development. This knowledge will be used to assess the risk of material misstatement while evaluating audit documentation, under the supervision of a more experienced auditor, and is meant to lay the foundation for further on-the-job learning opportunities (either at BABL AI or other algorithm auditing firms).
What this training is not:
This training is not meant to encompass all knowledge and competencies needed to perform algorithm auditing in the absence of supervision or oversight from a more experienced auditor.
Who this training is for:
This program is developed for AI governance, risk, and compliance professionals who need to perform, for example, AI risk or impact assessments within companies using or developing high-risk AI systems. This kind of training is important for companies trying to conform with:
- The EU AI Act
- The Digital Services Act
- ISO 42001
- Auditing requirements for ISO 42001 (see ISO 42006)
- The NIST AI RMF
Other groups that would find this certification useful:
- Consultants in AI Ethics and Governance
- Procurement specialists who are concerned about risks due to AI vendors
- Employees at VC firms who want to incorporate AI risks into their due diligence process
Topics covered in this training:
- Algorithms, AI, & Machine Learning
- Algorithmic Risk & Impact Assessments
- AI Governance & Risk Management
- Bias, Accuracy, & the Statistics of AI Testing
- Algorithm Auditing & Assurance
- BONUS: Finding your place in AI ethics consulting