What you'll learn
This course teaches a systematic approach to algorithmic risk and ethical impact assessments, an essential skill for practitioners working with emerging technology. After finishing this course, students will be able to:
- Identify the socio-technical components of an algorithmic system that are relevant for risk analysis
- Produce a narrative of these components (a "CIDA" narrative) as a form of algorithmic transparency
- Identify important stakeholders
- List engagement strategies for relevant stakeholders to determine their salient interests and rights, and to identify potential harms arising from the algorithmic system
- Decide which components of the algorithmic system can serve as metrics for risk analysis
- Develop initial assessment strategies for these metrics
Who is this course for?
This course was developed to prepare current and new BABL employees and contract consultants to work with clients on AI Governance, Ethical Impact Assessments, and Algorithm Auditing, but it is also suitable for:
- Consultants in AI Ethics and Governance
- Risk professionals wishing to incorporate algorithmic risks into their practice
- Procurement specialists concerned about risks posed by AI vendors
- Employees at VC firms who want to incorporate AI risks into their due diligence process
This course is part of a 5-course certification program for AI and Algorithm Auditors. Anyone can take the course and receive a certificate of completion, but only staff or contract Algorithm Auditors working with BABL AI will obtain certification, after passing an exam and an exit interview.