The European Union (EU) has taken a significant leap forward in the regulation of artificial intelligence (AI), with the European Parliament adopting its negotiating position on the AI Act on June 14, 2023. This landmark legislation aims to address the potential risks associated with AI services and establish robust controls. Since its initial draft in April 2021, the AI Act has undergone several updates to create a comprehensive framework.
Stricter Regulation for AI Services:
The most recent draft of the AI Act, introduced in May 2023, focuses on controlling "foundation models" and introduces a tiered approach that categorizes AI systems by risk level: 'low and minimal risk,' 'limited risk,' 'high risk,' and 'unacceptable risk.' AI tools in the 'low and minimal risk' category will be exempt from regulation, while 'limited risk' tools will be subject to transparency requirements. 'High-risk' AI practices will face stringent regulation, including the establishment of a publicly accessible database documenting the deployment of general-purpose and high-risk AI systems within the EU.
This database must be freely accessible, easily understandable, machine-readable, and user-friendly, enabling the general public to search for specific high-risk systems, locations, risk categories, and keywords. AI models associated with 'unacceptable risk' will be outright banned. The rigorous nature of the AI Act positions it as the world's most stringent legislation on artificial intelligence, serving as a benchmark for future regulations.
Ashish Gangar, a Senior Director specializing in data privacy, compliance, and cybersecurity at Armoryze, praised the legislation's broad scope, particularly its focus on high-risk applications such as facial recognition technologies and profiling systems. Ashish believes the EU AI Act will shape the future of AI legislation worldwide.
Similar to the General Data Protection Regulation (GDPR), the AI Act imposes substantial fines for non-compliance, with penalties of up to €30 million ($32 million) or 6% of global annual turnover.
Prioritizing Innovation over Regulation:
While the EU takes decisive steps in AI regulation, the United Kingdom (UK) has adopted a pro-innovation approach. The UK government announced in March that it would not introduce new legislation or establish a dedicated regulatory body for AI. Instead, existing regulators in sectors implementing AI will oversee its application.
The UK's approach includes the launch of a Foundation Model Taskforce, backed by a £100 million ($125 million) investment, aimed at fostering the development of AI systems to boost the nation's GDP.
British Prime Minister Rishi Sunak announced that the UK will host the first global AI summit in the fall of 2023, demonstrating the country's commitment to AI advancements. Additionally, the UK government has secured agreements with prominent AI entities, including Google DeepMind, OpenAI, and Anthropic, granting access to their AI models for research and safety purposes.
Although the UK's focus remains on innovation and maintaining its position as the world's third-leading AI nation, Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC), acknowledged the potential concerns surrounding AI regulation. Speaking at the Cyber 2023 conference at Chatham House, Cameron emphasized the need to understand the risks posed by generative AI, maximize its benefits within the cyber defense community, and disrupt adversaries' use of AI. While Cameron did not explicitly address AI regulation, it remains to be seen whether the UK will reevaluate its light-touch approach in response to growing public concern.
Canada's AI and Data Act:
In contrast to the UK's approach, Canada has introduced the Artificial Intelligence and Data Act as part of federal Bill C-27, the Digital Charter Implementation Act. Limited details are available at this stage, but a companion paper published in March indicates that Canada will not outright ban automated decision-making tools, even in critical areas. Instead, the Canadian government aims to incentivize AI developers to implement measures that prevent harm, including mitigation plans to reduce risks and greater transparency when AI is used in high-impact systems.
The EU's groundbreaking AI Act marks a significant milestone in the regulation of artificial intelligence and the mitigation of associated risks. By introducing comprehensive measures and a tiered approach, the EU aims to ensure the transparent and responsible deployment of AI technologies. Meanwhile, the UK's pro-innovation approach and Canada's focus on risk mitigation through incentivization demonstrate the diverse strategies being adopted globally. As the world continues to grapple with the challenges and opportunities presented by AI, the AI Act will serve as a model for legislators in Europe and beyond, shaping the future of AI governance and ethics.
At Armoryze, we understand the complexities of AI compliance and the need for businesses to navigate these regulations effectively. Our managed compliance services provide comprehensive support in ensuring your AI systems align with the requirements outlined in the EU's AI Act and other relevant regulations. From risk assessment to policy development and ongoing compliance monitoring, our team of experts is here to guide you through the compliance journey.
Don't let AI compliance be a burden. Take action today and safeguard your business with Armoryze's managed compliance services. Contact us for a consultation and let us help you stay ahead in the ever-evolving landscape of AI regulation.