Introducing our comprehensive certificate program for risk and compliance professionals who need to establish and manage a quality management system under the EU AI Act. The program combines four essential courses to provide a deep dive into the critical aspects of AI risk management and regulatory compliance.

  1. EU AI Act Compliance for High-Risk AI Systems (Starts April 22nd): Gain a thorough understanding of the EU AI Act, with a focus on identifying high-risk AI systems, the obligations placed on developers, strategies for implementing the Act's requirements, and demonstrating conformity through conformity assessments.
  2. Algorithmic Risk & Impact Assessments: Learn to assess the risks and impacts of AI algorithms so that potential harms are identified and mitigated effectively. The course covers methodologies for conducting these assessments and strategies for managing algorithmic risk.
  3. AI Governance & Risk Management: Explore the principles of AI governance and best practices for managing the risks associated with AI systems, including how to establish robust governance frameworks and risk management processes that support responsible AI deployment.
  4. Bias, Accuracy, & the Statistics of AI Testing: Delve into the critical issues of bias and accuracy in AI systems and the statistical methods used to test and validate AI models. Participants leave equipped to address bias and verify the accuracy of AI systems.

Upon completing this certificate program, participants will have the skills and knowledge to navigate the complexities of AI risk management and regulatory compliance. They will be prepared to implement and oversee quality management systems that comply with the EU AI Act, supporting the ethical and safe deployment of AI technologies. Join this program to become a proficient risk and compliance professional in the rapidly evolving field of AI.

About your instructor

Dr. Shea Brown, CEO and Founder of BABL AI: Shea is an internationally recognized leader in AI and algorithm auditing, bias in machine learning, and AI governance. He has testified and advised on numerous AI regulations in the US and EU. He is a Fellow at ForHumanity, a non-profit working to set standards for algorithm auditing and the organizational governance of artificial intelligence. He is also a founding member of the International Association of Algorithmic Auditors, a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices, and contribute to the emergence of Responsible AI. He holds a PhD in Astrophysics from the University of Minnesota and is currently a faculty member in the Department of Physics & Astronomy at the University of Iowa, where he has been recognized for teaching excellence by the College of Liberal Arts & Sciences.