KEYNOTE: Trustworthy AI - Obligation or Entrepreneurial Opportunity?
Artificial intelligence (AI) is penetrating ever more areas of the economy and society and taking on increasingly responsible tasks. Its potential can only be fully exploited if its use is technically reliable and if there is sufficient trust in the respective technologies. The requirements for the trustworthiness of AI systems can be expected to be shaped both by legal regulation (in high-risk areas) and by the demands of the market. AI-based automation, for example, can generate cost savings and competitive advantages, but only as long as the underlying AI works reliably and can flag its own uncertain predictions.
This talk first provides an overview of requirements for the trustworthiness of AI systems, addressing recently published results of the AI standardization roadmap and the planned EU regulation. It then examines how AI risks can be systematically evaluated and mitigated. Methods for the technical validation of AI systems are presented, along with examples of new tools for evaluating their technical quality properties. Finally, practical examples illustrate the procedure for an AI assessment and the benefits it brings to the assessed organizations.