The European Union’s artificial intelligence law, known as the EU AI Act, has been described by the European Commission as the “world’s first comprehensive AI law.” A few years after its adoption, it is gradually becoming a reality for the 450 million people living in the 27 EU countries.
But the EU AI Act is more than a European issue. It applies to local and foreign companies alike, and it can affect both providers and deployers of AI systems: the European Commission cites the example of a developer of a CV-screening tool and the bank that buys it. All of these parties now operate under a legal framework that sets the rules for AI use.
Why does the EU AI Act exist?
As is often the case with EU law, the EU AI Act exists to ensure a unified legal framework applies to a given topic across EU countries — in this case, AI. Now that the regulation is in place, it should ensure the “free movement, cross-border, of AI-based goods and services” without diverging local restrictions.
By regulating early, the EU is trying to create a level playing field across the region and foster trust. But the framework it adopted is not exactly laissez-faire: despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn’t do for society.
What is the purpose of the EU AI Act?
According to European lawmakers, the framework’s main goal is to “promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the [EU] Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems.”
Yes, that’s quite a mouthful, but it is worth a careful read. First, a lot will depend on how “human-centric” and “trustworthy” AI are defined. Second, it gives a sense of the precarious balance to be maintained between diverging goals: innovation vs. harm prevention, and AI uptake vs. environmental protection. As usual with EU law, the devil will be in the details.
How does the EU AI Act balance its different goals?
To balance harm prevention against the potential benefits of AI, the EU AI Act adopted a risk-based approach: it bans a handful of “unacceptable risk” use cases outright, flags a set of “high-risk” uses that require strict regulation, and applies lighter obligations to “limited risk” scenarios.
Has the EU AI Act come into effect?
Yes and no. The EU AI Act rollout began on August 1, 2024, but it only takes effect through a series of staggered compliance deadlines. In most cases, it will also apply sooner to new entrants than to companies already offering AI products and services in the EU.
The first deadline came into effect on February 2, 2025, enforcing bans on a small number of prohibited uses of AI, such as the untargeted scraping of the internet or CCTV footage for facial images to build or expand databases. Many other deadlines will follow, but most provisions will apply by mid-2026, unless the schedule changes.
What changed on August 2, 2025?
Since August 2, 2025, the EU AI Act has applied to “general-purpose AI models with systemic risk.”
GPAI (general-purpose AI) models are AI models trained on large amounts of data that can be used for a wide range of tasks. Under the EU AI Act, GPAI models can present systemic risks — “for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous [GPAI] models.”
Ahead of the deadline, the EU published guidelines for providers of GPAI models, which include both European companies and non-European players such as Anthropic, Google, Meta, and OpenAI. However, because these companies already have models on the market, they have until August 2, 2027, to comply, unlike new entrants.
Does the EU AI Act have teeth?
The EU AI Act comes with penalties that lawmakers intended to be simultaneously “effective, proportionate and dissuasive.”
While the details will be set by EU countries, the regulation establishes the overall spirit — penalties vary depending on the deemed risk level, with thresholds at each level. Breaching the ban on prohibited AI applications draws the highest penalty: “up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher).”
The European Commission can also fine providers of GPAI models up to €15 million or 3% of their annual turnover.
How fast do existing players intend to comply?
The voluntary GPAI code of practice, which includes commitments such as not training models on pirated content, is a good indicator of how companies may engage with the framework law before they are forced to.
In July 2025, Meta announced it would not sign the voluntary GPAI code of practice, which is meant to help such providers comply with the EU AI Act. Google, in contrast, confirmed it would sign, despite reservations.
Signatories include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI, and others. However, as the Google example shows, signing does not equal wholehearted endorsement.
Why are (some) tech companies fighting these rules?
In the blog post announcing that Google would sign the voluntary GPAI code of practice, Kent Walker, its president of global affairs, made clear the company still had reservations. “We are concerned that the AI Act and code risk slowing the development and deployment of AI in Europe,” he wrote.
Meta was more radical. Its chief global affairs officer, Joel Kaplan, said in a LinkedIn post that “Europe is heading down the wrong path on AI.” Calling the EU’s implementation of the AI Act “overreach,” he said the code of practice “introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
European companies have expressed concerns as well. Arthur Mensch, CEO of French AI champion Mistral AI, was part of a group of European CEOs who urged Brussels to “stop the clock” for two years before key obligations of the EU AI Act came into effect in July 2025.
Will the schedule change?
In early July 2025, the European Union responded with a firm no to the lobbying for a pause, saying it would stick to its timeline for implementing the EU AI Act. The August 2, 2025, deadline went ahead as scheduled, and we will update this story if anything changes.