This short course on the EU AI Act is designed to help developers and deployers of high-risk AI systems navigate the complex regulatory landscape and comply with the Act's requirements. The course is organized into several key topics, each covering a different aspect of the regulation and its implications for AI development.
- Introduction to the EU AI Act: This topic provides an overview of the Act, including its purpose, scope, and key definitions. Participants will gain an understanding of the regulatory framework and why compliance matters for the safe and ethical deployment of AI systems.
- Understanding High-Risk AI Systems: This topic covers the criteria for determining whether an AI system is high-risk, including its areas of application and the risks it poses. Real-world examples and case studies illustrate the identification process.
- Obligations for Developers of High-Risk AI Systems: This topic covers the specific obligations imposed on developers of high-risk AI systems, such as transparency, data governance, technical documentation, and human oversight. Participants will explore best practices for meeting these obligations and the role of quality management systems in ensuring compliance.
- Strategies for Implementing Requirements: This topic offers practical guidance on implementing the Act's requirements. Participants will learn about risk management strategies, data quality and governance approaches, and techniques for maintaining transparency and human oversight in AI systems.
- Achieving Conformity and Obtaining a Conformity Assessment: This topic focuses on the conformity assessment process, including the steps to achieve conformity with the Act, the role of notified bodies, and how to maintain compliance over time. Participants will gain insights into obtaining a conformity assessment and the importance of post-market monitoring.
Throughout the course, interactive elements such as quizzes, discussion prompts, and practical exercises will be used to enhance understanding and engagement. Supplementary materials like checklists, templates, and guidelines will be provided to help participants apply the concepts to their own AI systems.
By the end of the course, participants will have a comprehensive understanding of the EU AI Act and the tools and strategies needed to develop and deploy high-risk AI systems in compliance with the regulation.
About the Instructor
Dr. Shea Brown, CEO and Founder of BABL AI: Shea is an internationally recognized leader in AI and algorithm auditing, bias in machine learning, and AI governance. He has testified and advised on numerous AI regulations in the US and EU. He is a Fellow at ForHumanity, a non-profit working to set standards for algorithm auditing and organizational governance of artificial intelligence. He is also a founding member of the International Association of Algorithmic Auditors, a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices, and contribute to the emergence of Responsible AI. He holds a PhD in Astrophysics from the University of Minnesota and is currently a faculty member in the Department of Physics & Astronomy at the University of Iowa, where he has been recognized for teaching excellence by the College of Liberal Arts & Sciences.