EU AI Act: What it is and how it can influence your AI strategy

Updated - 13 March 2024

The European Union has created a groundbreaking legislative framework, the EU AI Act, to regulate the development and deployment of Artificial Intelligence (AI) within its jurisdiction. The act, approved by MEPs on 13 March 2024, is the world's first comprehensive legal framework for AI, offering a clear and transparent set of rules for AI systems. For businesses operating within the EU or handling EU citizens' data, understanding and adapting to these new compliance requirements is not just a necessity but an opportunity to distinguish their AI-driven initiatives through ethical and responsible AI practices.

Understanding the EU AI Act's Risk-Based Approach

The essence of the EU AI Act is its risk-based approach to regulation, categorising AI applications into four distinct levels of risk: unacceptable, high, limited, and minimal. This classification determines the extent of regulatory scrutiny and obligations, aiming to foster innovation while ensuring safety, transparency, and respect for fundamental rights.

EU AI Act - risk approach

Unacceptable Risk: A Red Line for AI

At the top of the risk pyramid, certain AI practices are deemed unacceptable due to their clear threat to safety, livelihoods, and rights. These include manipulative systems that subvert free will and social scoring mechanisms that could lead to discrimination. The act unequivocally bans such practices, setting a firm boundary against deploying AI systems that could harm societal values and individual freedoms. The ban also covers systems that exploit the vulnerabilities of specific groups (for example, people with a physical or mental disability) and 'real-time' biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of narrowly defined cases.

High Risk: Rigorous Scrutiny for Safety and Rights

High-risk AI systems are subject to stringent assessment and compliance requirements, particularly those impacting critical infrastructure, education, employment, and other essential services. These systems must be registered in an EU-wide database prior to deployment, ensuring they meet standards for data quality, transparency, and human oversight. This category underscores the need for a balanced approach that allows AI to flourish in sensitive areas while safeguarding public interest and individual rights.

Limited and Minimal Risk: Encouraging Innovation with Oversight

For AI applications posing limited risk, such as chatbots or emotion recognition systems, the act mandates minimal transparency obligations to inform users of their interactions with AI. This ensures that while innovation progresses, it does so with consumer trust and safety in mind. Meanwhile, AI systems classified under minimal risk can be developed and used freely, promoting innovation with voluntary adherence to high-risk standards encouraged through codes of conduct.
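
To see how this tiering might translate into an internal AI inventory, here is a minimal sketch in Python. The tier names and example obligations simply restate the categories above; the system names and helper function are hypothetical, and any real classification should follow the act's own criteria and legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # stringent assessment, registration, oversight
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # free to develop and use

def obligations_for(tier: RiskTier) -> str:
    """One-line summary of what each tier implies, per the categories above."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: do not build or deploy.",
        RiskTier.HIGH: "Register in the EU database; ensure data quality, transparency and human oversight.",
        RiskTier.LIMITED: "Inform users that they are interacting with an AI system.",
        RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct encouraged.",
    }[tier]

# Hypothetical internal inventory mapping example use cases to tiers.
# A real classification must follow the act's own criteria and legal advice.
ai_inventory = {
    "customer-support-chatbot": RiskTier.LIMITED,
    "cv-screening-for-hiring": RiskTier.HIGH,  # employment is a high-risk area
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} -> {obligations_for(tier)}")
```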

The Strategic Implications for Businesses

The EU AI Act presents both challenges and opportunities for businesses. Adhering to its regulations requires a thorough understanding of the act's provisions and a proactive approach to compliance. For high-risk AI applications, businesses must implement robust risk management frameworks, ensuring their AI systems are transparent, equitable, and under human oversight. This includes rigorous assessment processes, detailed record-keeping, and adherence to strict data quality and cybersecurity standards.

However, beyond compliance, the act offers a chance to excel. By embracing ethical AI principles and going beyond the minimum requirements, companies can differentiate themselves in a competitive market. This involves not just meeting the legal standards but embedding responsible AI practices into the core of business strategies, thereby building trust and credibility with consumers and stakeholders.

Preparing for the EU AI Act: A Roadmap for Compliance and Innovation

To safely navigate the complexities of the EU AI Act, businesses should consider the following steps:

  1. Conduct an AI Audit: Evaluate your current AI systems against the act's requirements to identify areas needing adjustment or enhancement (a minimal audit-record sketch follows this list).
  2. Develop a Risk Management Strategy: For high-risk applications, establish comprehensive frameworks to monitor, report, and mitigate AI-related risks.
  3. Invest in Training: Ensure your team is knowledgeable about the act and its implications, fostering a culture of compliance and ethical AI use.
  4. Consult Experts: Engage with AI ethics and compliance professionals to navigate the regulatory landscape effectively and stay ahead of potential changes.
  5. Promote Transparency and Accountability: Implement clear policies for AI decision-making processes, maintaining detailed records to demonstrate compliance.
  6. Stay Informed: Keep abreast of regulatory updates and be prepared to adapt your AI strategies as the regulatory environment evolves.
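
As a starting point for steps 1 and 2, the sketch below shows what a per-system audit record might capture, assuming a simple Python data structure. The field names and gap checks are illustrative assumptions drawn from the obligations discussed above, not terms defined by the act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAuditRecord:
    """Illustrative per-system audit record; field names are assumptions, not terms from the act."""
    system_name: str
    intended_purpose: str
    risk_tier: str                    # "unacceptable" / "high" / "limited" / "minimal"
    registered_in_eu_database: bool   # high-risk systems must be registered before deployment
    human_oversight_measures: str     # who can intervene, and how
    data_quality_checks: list[str] = field(default_factory=list)
    transparency_notices: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def gaps(self) -> list[str]:
        """Flag the most obvious compliance gaps for a high-risk system."""
        issues = []
        if self.risk_tier == "high" and not self.registered_in_eu_database:
            issues.append("High-risk system not yet registered in the EU database.")
        if not self.human_oversight_measures:
            issues.append("No human oversight measures documented.")
        if not self.data_quality_checks:
            issues.append("No data quality checks recorded.")
        return issues

# Example: a single audit entry for a hypothetical recruitment tool.
record = AIAuditRecord(
    system_name="cv-screening-for-hiring",
    intended_purpose="Rank incoming job applications",
    risk_tier="high",
    registered_in_eu_database=False,
    human_oversight_measures="Recruiter reviews every automated ranking",
    data_quality_checks=["bias review of training data"],
)
print(record.gaps())  # -> ['High-risk system not yet registered in the EU database.']
```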

Are you facing regulatory, ethical or technical challenges? Download our guide for clear, jargon-free and actionable insights to mitigate risks.

The Ultimate GenAI Guide

Leveraging Compliance for Competitive Advantage

Developing Proprietary AI Systems for Compliance and Security

Businesses should consider the strategic development of their own AI products and systems to leverage compliance for competitive advantage. By building AI solutions in-house, companies gain full control over the compliance and security aspects, ensuring that every component of the AI system adheres to the EU AI Act from the ground up. This approach not only facilitates a deeper integration of ethical AI principles and compliance measures but also enables businesses to tailor AI functionalities precisely to their operational needs and strategic goals. 

Moreover, proprietary AI systems eliminate dependencies on third-party solutions, which may not fully align with the specific compliance and security standards required by the act. By investing in developing custom AI or GenAI solutions, businesses can ensure a higher level of data protection, system security, and regulatory compliance, thereby enhancing trust among users and stakeholders. Additionally, owning the intellectual property of AI systems can provide a unique value proposition, setting a business apart in a crowded market where consumers and partners increasingly value transparency, security, and compliance.

Ethical AI as a Brand Differentiator

Businesses that embed ethical AI principles into their operations can use this as a powerful brand differentiator. By promoting the use of transparent, fair, and accountable AI systems, companies can build a reputation for trustworthiness and social responsibility, appealing to increasingly privacy-conscious consumers.

Innovation within Regulatory Frameworks

The EU AI Act's risk-based approach encourages innovation by imposing fewer restrictions on low- and minimal-risk AI applications. Businesses should leverage this flexibility to experiment with and develop innovative AI solutions that can be brought to market more quickly while still ensuring that higher-risk applications are developed responsibly and in compliance with the act.

Engage with Stakeholders

Use the EU AI Act as an opportunity to engage with stakeholders, including customers, employees, and regulators. Educating these groups about your AI systems, how they comply with the act, and the benefits they offer can help demystify AI and build broader support for your AI initiatives.

Invest in AI Literacy and Skills

Building AI literacy across your organisation is crucial for navigating the complexities of AI compliance and innovation. This includes training for technical staff on the specifics of AI development and compliance, as well as broader education for all employees on the ethical and societal implications of AI. Such investments can enhance your team's ability to innovate responsibly and comply with regulatory requirements.

The UK's Stance on AI Regulation

Mirroring the EU's objectives, the UK aims to strike a harmonious balance between fostering innovation and guaranteeing the responsible deployment of AI technologies. Unlike the EU, however, the UK opts for a less stringent framework, eschewing comprehensive AI-specific legislation in favour of adapting its existing legal and regulatory structures to accommodate AI advancements. This adaptation extends to the refinement of data protection statutes and the introduction of AI-focused ethical standards.

Such a flexible regulatory environment might simplify the incorporation of AI technologies, yet it necessitates a diligent approach to manoeuvring through the established legal terrain.

Next Steps

The EU AI Act enters into force 20 days after its publication in the Official Journal and becomes fully applicable 24 months later, with a few exceptions:

  • Bans on prohibited practices apply 6 months after entry into force
  • Codes of practice must be ready 9 months after entry into force
  • Rules for general-purpose AI, including governance, apply 12 months after entry into force
  • Obligations for high-risk systems embedded in products covered by existing EU product legislation apply 36 months after entry into force
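
For planning purposes, these staggered deadlines can be mapped onto a calendar once the entry-into-force date is known. The sketch below uses a deliberately hypothetical date together with the month offsets listed above; swap in the real date once the act is formally published in the Official Journal.

```python
from datetime import date

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day to the month length."""
    month_index = start.month - 1 + months
    year, month = start.year + month_index // 12, month_index % 12 + 1
    leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(start.day, days_in_month))

# Hypothetical entry-into-force date (20 days after publication in the Official
# Journal); replace with the real date once the act is formally published.
entry_into_force = date(2024, 6, 1)

milestones = {
    "Bans on prohibited practices": add_months(entry_into_force, 6),
    "Codes of practice ready": add_months(entry_into_force, 9),
    "General-purpose AI rules and governance": add_months(entry_into_force, 12),
    "Act fully applicable (most provisions)": add_months(entry_into_force, 24),
    "High-risk systems embedded in regulated products": add_months(entry_into_force, 36),
}

for milestone, deadline in milestones.items():
    print(f"{milestone}: {deadline.isoformat()}")
```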

Final Thoughts

The EU AI Act is set to redefine the boundaries of AI development and use within the European Union, balancing the scales between innovation and ethical responsibility. For businesses, this represents a pivotal moment to align their AI strategies with the new regulatory landscape, ensuring compliance while seizing the opportunity to lead in the development of ethical, transparent, and trustworthy AI systems. By embracing the principles of the EU AI Act, companies can not only navigate the complexities of compliance but also enhance their competitive edge in the AI market.

Getting Started with End-to-End AI Transformation

Partner with Calls9, a leading Generative AI agency, through our AI Fast Lane programme, designed to identify where AI will give you a strategic advantage and help you rapidly build AI solutions in your organisation. As an AI specialist, we are here to facilitate the development of your AI strategy and solutions within your organisation, guiding you every step of the way:

  • Audit your existing AI capabilities
  • Create your Generative AI strategy
  • Identify Generative AI use cases
  • Build and deploy Generative AI solutions
  • Test and continuously improve your AI solutions

Learn more and book a free AI Consultation