The newly released ISO/IEC 42001 standard will increase trust in AI

Aimed at helping businesses and organizations develop a robust artificial intelligence (AI) governance framework, ISO/IEC 42001 on AI management systems was published today, 18 December 2023.

The increasing adoption of AI by product and service providers has been accompanied by growing mistrust of the new technology. ISO/IEC 42001 is the first international management system standard for the safe and reliable development and implementation of AI.

Welcoming the new standard, Thomas Douglas, Global ICT Industry Manager at DNV, said, “There is a clear and definite need for a standard around AI technology and its applications. It can truly help manage risks and ensure effective, safe and ethical application that fosters trust amongst organisations and users. We are pleased that it is now finally here and excited about how it will help organisations advance.”

Organisations are keen to grasp the opportunities offered by AI, but caution is needed to address stakeholder concerns and meet emerging risks and regulations. To address some of the concerns around AI, governments around the globe are introducing or planning laws and regulations governing its use, including the forthcoming EU AI Act.

The new standard provides requirements for a certifiable AI management system framework that will enable organizations to gain maximum benefits while simultaneously reassuring stakeholders that systems have been developed, and are being managed, responsibly.

Taking a similar approach to ISO 9001 on quality management and ISO 27001 on information security, for example, ISO/IEC 42001 provides best practices, rules, definitions and guidance to manage risks and operational aspects.

The objectives of ISO/IEC 42001 can be summarised as:

  • Promoting the development and use of AI systems that are trustworthy, transparent and accountable.
  • Emphasizing ethical principles and values such as fairness, non-discrimination and respect for privacy when deploying AI systems so as to meet stakeholder expectations.
  • Helping organizations identify and mitigate risks related to AI implementation which in turn improves efficiency and reduces cost.
  • Maintaining regulatory compliance including data protection requirements.
  • Building greater confidence in AI management by encouraging organizations to prioritise human well-being, safety and user experience in AI design and deployment.

“AI technology holds such great promise, but as with any innovation it must be implemented responsibly to be effective. If you are a hospital, for example, you want to save lives, right? As a global management system certification body, DNV is eager to work with customers to help them create trust in their application of AI,” says Thomas Douglas.