BSI has launched a first-of-its-kind AI management system standard designed to enable the safe, secure, and responsible use of artificial intelligence (AI) across society.

The standard has been introduced following research showing that 61 percent of people want global guidelines for the technology.

The international standard, BS ISO/IEC 42001, is intended to help organisations use AI responsibly, addressing considerations such as non-transparent automated decision-making, the use of machine learning rather than human-coded logic to design systems, and continuous learning.

Published in the UK by BSI as the UK’s National Standards Body, the guidance sets out how to establish, implement, maintain, and continually improve an AI management system, with a focus on safeguards.

It is an impact-based framework that provides requirements to facilitate context-based AI risk assessments, with detail on risk treatments and controls for internal and external AI products and services. It aims to help organisations introduce a quality-centric culture and responsibly play their part in the design, development, and provision of AI-enabled products and services that can benefit them and society as a whole.

The publication is prominently referenced in the UK Government’s National AI Strategy as a step towards guardrails that ensure AI’s safe, ethical, and responsible use.

Susan Taylor Martin, BSI CEO, said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.

“The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

BSI’s recent Trust in AI Poll of 10,000 adults across nine countries found three-fifths globally and the same proportion in the UK wanted international guidelines to enable the safe use of AI.

Some 38 percent of respondents globally already use AI every day at work, while more than 62 percent expect their industries to do so by 2030. The research found that closing the ‘AI confidence gap’ and building trust in the technology is key to unlocking its benefits for society and the planet.

Scott Steedman, Director General, Standards at BSI, added: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework. While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them.

“In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.

“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI. Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy. The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”

BSI represented UK interests in the development of the standard as a participating member in ISO/IEC JTC 1/SC 42.

The standard is available to download on the BSI website.

