European Union lawmakers recently gave final approval to the Artificial Intelligence Act, or “AI Act,” in a 71-8 vote with 7 abstentions. The AI Act is the first comprehensive law of its kind, and could shape other governments’ efforts to regulate AI technology by setting an example of what such regulations can look like.
The AI Act takes a “risk-based approach” to products and services that use artificial intelligence: the higher the risk of an application, the greater the regulatory scrutiny it faces. According to Associated Press reporting, this means that low-risk AI tools such as content recommendation systems will be lightly regulated and largely subject to voluntary codes of conduct, while the use of AI in medical devices or critical infrastructure will have to meet far stricter requirements. Some uses are banned outright, such as social scoring systems and certain forms of face-scanning for biometric identification by police in public.
With respect to generative AI systems (such as ChatGPT), developers will have to:
- Provide detailed summaries of any text, video, or other data that is used to train their AI systems;
- Comply with EU copyright law in their use of training data;
- Label generated pictures or videos of real people, sometimes called “deepfakes,” as artificially created or manipulated;
- Assess and mitigate the risks of their systems, and report any malfunctions that cause someone’s death or serious harm to health or property;
- Put cybersecurity measures in place; and
- Disclose how much energy their models use.
The AI Act is expected to officially become law within 2-3 months, but will not be in full force until mid-2026. As for enforcement, the AP reports that “…each EU country will set up their own AI watchdog, where citizens can file a complaint if they think they’ve been the victim of a violation of the rules… [and] violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.”
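For a sense of how that penalty ceiling scales, here is a minimal Python sketch. It assumes the “whichever is higher” reading of the 35 million euro / 7%-of-revenue cap that has been widely reported for the Act’s most serious violations; the function name and example revenue figure are ours, purely for illustration, and the assumption should be checked against the Act’s final text.

```python
# Illustrative sketch of the AI Act's penalty ceiling, assuming the widely
# reported "whichever is higher" reading for the most serious violations.
# The function name and example revenue figure are hypothetical.

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Theoretical maximum fine: the greater of a fixed 35M EUR cap
    or 7% of worldwide annual revenue."""
    FIXED_CAP_EUR = 35_000_000   # fixed ceiling: 35 million euros
    REVENUE_SHARE = 0.07         # 7% of global annual revenue
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# A company with 2 billion EUR in global revenue: 7% is 140M EUR,
# which exceeds the 35M EUR floor, so the cap is 140M EUR.
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```

Under that reading, the percentage-based ceiling dominates for any company with more than 500 million euros in global revenue (since 7% of 500 million is exactly 35 million).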
Tech companies, the most prominent of which are based in the U.S., have generally supported AI regulation that would promote the safe use of AI and temper its negative effects. At the same time, they have lobbied to ensure that their investments in the commercial development and use of AI remain well protected. The most visible example came last May, when OpenAI CEO Sam Altman testified in favor of AI regulation before the U.S. Senate. Although President Biden has since signed an executive order on AI and legislation is expected to follow, the U.S. remains far behind the EU when it comes to regulating AI.
The AI Act will most directly affect European AI startups, but for foreign AI companies that merely operate in Europe, it remains to be seen whether they will come into compliance or abandon the European market entirely.