Decoding The Impact: How The EU AI Act Reshapes Business And Tech Landscape

As the EU AI Act undergoes its final technical adjustments, which may take weeks or even months before the EU Parliament approves it for enforcement, here are some of the key takeaways from the agreement.


The European Commission first introduced the inaugural regulatory framework for AI within the EU in April 2021. In December 2023, after almost two and a half years of deliberation, lobbying, and intense negotiations, EU lawmakers concluded a comprehensive deal on the AI Act, making it the world’s first extensive legislation on AI.

“It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology. Correct implementation will be key – the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”, said co-rapporteur Brando Benifei (S&D, Italy), after the deal was finalised.

The primary objective of the AI Act is to address potential harm in critical areas where AI applications pose significant risks to fundamental rights, including healthcare, education, border surveillance, and public services. Notably, it also prohibits applications that present an “unacceptable risk.”

Under the AI Act, AI systems categorised as “high risk” will be subject to stringent regulations mandating the implementation of risk-mitigation measures, high-quality datasets, comprehensive documentation, and human oversight. Conversely, AI applications falling outside this “high risk” classification, such as recommender systems and spam filters, will be exempt from these requirements.
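To make the tiering concrete, here is a minimal sketch in Python mapping the examples named in this article to the Act’s risk tiers. The mapping is illustrative only: actual classification under the Act depends on the specific use case, not on a product category.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: risk mitigation, quality datasets, documentation, human oversight"
    MINIMAL = "exempt from the strict requirements"

# Toy mapping based only on the examples named in this article; a real
# assessment depends on the concrete use case, not the product label.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "AI in healthcare": RiskTier.HIGH,
    "AI in education": RiskTier.HIGH,
    "border surveillance AI": RiskTier.HIGH,
    "recommender system": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.value}")
```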

This legislation is particularly significant as it brings much-needed rules and enforcement mechanisms to an industry that, until now, has operated in a regulatory vacuum, resembling a digital Wild West. The AI Act represents a landmark effort to instill transparency, ethics, and accountability in the development and deployment of artificial intelligence technologies within the EU and serves as a precedent for global AI governance.

Here are some of the key takeaways from the EU AI Act:

The EU is now the premier AI policeman

The AI Act establishes a European AI Office to oversee compliance, implementation, and enforcement. This office will be the world’s first to enforce binding rules on AI, positioning the EU as a global leader in tech regulation.

The governance structure includes a scientific panel of independent experts to address systemic risks posed by AI. Fines for noncompliance range from 1.5% to 7% of a firm’s global turnover, depending on the violation. EU citizens will be able to file complaints about AI systems, making Europe a pioneer in AI accountability.
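Taking the figures above at face value, here is a quick back-of-the-envelope sketch of the possible fine range for a given turnover. It uses only the percentages stated above; under the Act the applicable tier also depends on the type of infringement, which this toy calculation ignores.

```python
def fine_range(global_turnover_eur: float) -> tuple[float, float]:
    """Lower and upper fine bounds using the 1.5%-7% of turnover figures above."""
    return 0.015 * global_turnover_eur, 0.07 * global_turnover_eur

# Hypothetical firm with EUR 10 billion in global turnover
low, high = fine_range(10e9)
print(f"EUR {low:,.0f} to EUR {high:,.0f}")  # EUR 150,000,000 to EUR 700,000,000
```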

Wiggle room for AI companies

The EU AI Act appears less stringent than anticipated. The law imposes some limits on foundation models but offers significant leeway to “open-source models” — those built with publicly modifiable code. “This could advantage open-source AI firms in Europe that opposed the legislation, such as France’s Mistral and Germany’s Aleph Alpha, and even Meta, which introduced the open-source model LLaMA”, says Armand Ruiz, Director of AI at IBM, who has developed a GPT tool to assist in evaluating projects for compliance with the EU AI Act. Companies can thus assess whether they fall under the stricter rules laid down in the Act.

AI companies must provide better documentation, comply with EU copyright law, and share information about the data used to train models. Stricter rules apply to the most powerful AI models, determined by the computing power needed to train them.
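For a sense of how a compute-based threshold might be checked in practice, here is a minimal sketch. It assumes the 10^25-FLOP figure widely reported as the Act’s presumption of “systemic risk”, and the common 6 × parameters × tokens heuristic for estimating dense-transformer training compute; the heuristic and all names here are illustrative assumptions, not prescriptions from the Act.

```python
# Widely reported training-compute threshold above which the AI Act
# presumes a general-purpose model poses "systemic risk" (assumption).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute with the common 6 * N * D heuristic."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate crosses the assumed threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical 70B-parameter model trained on 2 trillion tokens
flops = estimated_training_flops(70e9, 2e12)  # about 8.4e23 FLOPs
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 2e12)}")
```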

Binding Rules on Transparency and Ethics

The AI Act introduces significant and legally binding regulations on transparency and ethics in AI. It mandates that tech companies notify users when they are interacting with chatbots, biometric systems, or emotion recognition technologies. The Act also requires the labelling of deepfakes and AI-generated content, ensuring that AI-generated media can be detected.
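As a rough illustration of what such disclosure could look like inside an application, here is a minimal sketch that prepends a user-facing notice to a chatbot reply and attaches machine-readable provenance metadata. The wrapper type and field names are hypothetical, not a format mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabelledAIContent:
    """Hypothetical wrapper attaching provenance metadata to AI output."""
    text: str
    generator: str                 # identifies the producing system
    ai_generated: bool = True      # machine-readable disclosure flag
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def with_disclosure(reply: str, generator: str) -> LabelledAIContent:
    """Prepend a user-facing notice and return the labelled content."""
    notice = "[Notice: you are interacting with an AI system] "
    return LabelledAIContent(text=notice + reply, generator=generator)

print(with_disclosure("Your claim has been received.", "support-bot-v1").text)
```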

Organisations providing essential services, such as insurance and banking, must conduct impact assessments on how AI systems affect fundamental rights.

The EU AI Act intends to promote national security

The EU bans specific AI applications to safeguard fundamental rights. Prohibited uses include biometric categorisation based on sensitive characteristics, untargeted scraping of facial images, emotion recognition in the workplace or in schools, social scoring, and AI systems that manipulate human behaviour. Predictive policing is restricted unless accompanied by clear human assessment and objective facts.

However, the AI Act excludes military and defence AI systems from its scope. Police use of biometric identification in public places is regulated, requiring prior court approval and remaining limited to specific serious crimes.

So, what happens after the enforcement of the EU AI Act?

After final technical adjustments, which may take weeks or even months, EU member states and the European Parliament must formally approve the text before it enters into force. Tech companies will then have two years to implement the rules, although the bans on specific AI uses take effect after just six months, and companies developing foundation models must comply within one year of the Act’s entry into force.