What you'll learn
This course teaches a systematic approach to algorithmic risk and fundamental rights impact assessments, a core skill for practitioners working with emerging technology. After finishing this course, students will be able to:
- Identify the socio-technical components of an algorithmic system that are relevant for risk analysis
- Produce a narrative of these components (a "CIDA" narrative) as a form of algorithmic transparency
- Identify key stakeholders affected by the system
- List engagement strategies for eliciting stakeholders' salient interests and fundamental rights, and for identifying potential harms caused by the algorithmic system
- Decide which components of the algorithmic system can serve as metrics for risk analysis
- Develop initial assessment strategies for these metrics
Who is this course for?
This course is designed for AI governance, risk, and compliance professionals who need to perform AI risk or impact assessments within companies that use or develop high-risk AI systems. This training is especially relevant for organizations seeking to conform with:
- The EU AI Act
- Digital Services Act
- ISO 42001
- ISO 42006 (requirements for bodies auditing against ISO 42001)
- NIST AI RMF
Other groups that would find this course and certification useful:
- Consultants in AI Ethics and Governance
- Procurement specialists who need to assess risks posed by AI vendors
- Employees at VC firms who want to incorporate AI risks into their due diligence process
This course is part of a 5-course certification program for AI and Algorithm Auditors.
About your instructor
Dr. Shea Brown, CEO and Founder of BABL AI: Shea is an internationally recognized leader in AI and algorithm auditing, bias in machine learning, and AI governance. He has testified and advised on numerous AI regulations in the US and EU. He is a Fellow at ForHumanity, a non-profit working to set standards for algorithm auditing and organizational governance of artificial intelligence. He is also a founding member of the International Association of Algorithmic Auditors, a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices and contribute to the emergence of Responsible AI. He has a PhD in Astrophysics from the University of Minnesota and is currently a faculty member in the Department of Physics & Astronomy at the University of Iowa, where he has been recognized for his teaching excellence from the College of Liberal Arts & Sciences.