News & Analysis

AI Governance – EU Takes Pole Position

The EU's new rules for AI-led software could end up being the template for other countries

Just days after India asked its technology companies to get official sign-offs before launching any new GenAI model, the European Union has once again taken the lead by framing broad rules to govern the AI-led software business. Small wonder that regional lawmakers are calling it the “world’s first comprehensive AI law.”

The European Parliament voted last week to adopt the AI Act, which sets up a risk-based framework for artificial intelligence and applies rules and requirements depending on the level of risk attached to each use case. Lawmakers overwhelmingly backed the provisional agreement reached in talks last December.

Can India benefit from the EU laws?

The new law has the backing of all 27 ambassadors of EU member states and will become law once it receives a final round of approvals from the European Council. It will then come into force 20 days after publication in the EU’s Official Journal. The first subset of provisions, covering prohibited use cases, will bite after six months, with others applying after 12, 24 and 36 months; full implementation is expected by mid-2027.

The formalization of rules in the EU comes at a time when India has sent conflicting signals about how it perceives the GenAI revolution. Having first welcomed it with open arms, the government then left IT companies a tad confused when the junior IT minister issued an advisory on untested AI platforms and asked for compliance.

[Also Read: What’s beyond the hype around GenAI?]

The immediate trigger for the advisory was all too obvious: a fiasco in which Google’s Gemini described India’s Prime Minister in unsavory terms. The upcoming general elections, and the possibility that bias or discrimination could affect their integrity, were another factor.

However, once the elections are through and a new government assumes charge early in June, India’s administrators and lawmakers could well turn their attention to the EU’s quite exhaustive laws, especially the non-compliance penalties, under which an entity could pay up to 7% of its global annual turnover for violating a ban.

The fundamental basis of the regulation is global 

The lawmakers were quite clear that fundamental societal values should be attached to the concept of artificial intelligence so that future AI develops only in a human-centric fashion. In other words, humans remain in control of a technology that helps people make new discoveries, drives economic growth and societal progress, and unlocks human potential.

Use cases facing a total ban include biometric systems that could infer private data such as race, religion, sexual orientation and political beliefs; social scoring used to screen individuals out of jobs; systems attempting to recognize employee or student emotions; and tools seeking to manipulate people’s behavior.

Additionally, the law defines AI used in education, employment or remote biometrics as “high risk.” Developers must register such systems and comply with the risk and quality management provisions set by the new law.

This approach leaves most AI apps outside the law’s purview, given that they are low risk. However, the legislation imposes transparency obligations on another subset, including AI chatbots and GenAI tools capable of creating deepfakes. General-purpose AI models also face added regulation if classified as posing “systemic risk.”

This last risk tier was added to the mix but watered down considerably after some member states, led by France, lobbied hard on the grounds that Europe should focus on scaling national champions in the field or risk falling behind in the global AI race. One can see the hand of French startups like Mistral in this shift.

What next though?

Though such general-purpose AI models aren’t completely exempt from the law, they face only limited transparency requirements. Only models trained with compute power above a specific benchmark must carry out risk assessments and mitigation. Lawmakers, for their part, rejected the suggestion that Mistral’s lobbying had got it what it wanted, insisting that their framework would still ensure transparency.

European lawmakers were visibly pleased with the outcome, having withstood intense lobbying to set a benchmark for other countries. They said the AI Act represents the start of the EU’s journey on AI governance and underscored that the rules will evolve and be extended through additional legislation in the future.