Capturing the Benefits of Scalable AI through Responsible AI
14:50 - 15:10, 22nd of May (Wednesday) 2024/ INSPIRE STAGE
Responsible AI is more important than ever. However, implementing responsible AI values is easier said than done. Available guidelines and frameworks are often high-level and abstract, leaving senior managers, product developers and compliance team members with more questions than answers. How can risk-owners measure and evaluate AI risks to make informed decisions about the use of AI tools? What is needed is an end-to-end framework to implement AI responsibly across an organisation and provide ongoing monitoring and assurance.
Taking into consideration recent developments such as the EU AI Act, as well as other national regulations, I will explain how organisations ought to implement the seven responsible AI principles outlined by the EC's High-Level Expert Group on AI to create policies and procedures for ongoing AI assurance activities, including performance management, bias audits, transparency, accountability, human oversight, monitoring and governance. This framework combines interdisciplinary expertise in law, ethics and data science to do the important work of translation between AI techniques, legal obligations and responsible best practices. The outcome will help organisations achieve innovation at scale with a pioneering framework to drive responsibility.