The EU’s Draft AI Regulations: A Thoughtful But Imperfect Start

The EU’s proposed rules for AI may not be perfect, but they mark the start of a more meaningful discourse on algorithmic fairness and a safer relationship with artificial intelligence

Catriona Campbell
Aug 13, 2021

In April this year, the European Commission published its much-anticipated Draft AI Regulations. Although the proposed rules haven’t yet been adopted (a process that could take until 2026), they’re the first serious attempt anywhere to regulate artificial intelligence. I recently discussed the draft regulations with fellow members of the newly formed Scottish AI Alliance Leadership Circle. As it’s a complex topic, my colleague Callum Sinclair, Head of Technology & Commercial at law firm Burness Paull, provided us with his view of the draft regulations, a great summary of the key points:

  • A risk-based approach, distinguishing between unacceptable (and thus banned), high, limited, and minimal risk, with proportionate regulatory interventions at each level (sketched in code after this list).
  • A scope of application much broader than the EU itself, reaching any company that does business with EU citizens.
  • Enforcement delegated to the Member States.
  • A range of obligations on technology providers, covering high-quality data sets, user information, technical documentation, human oversight, and more.
  • Substantial fines for non-compliance: up to €30 million or 6% of worldwide annual turnover, whichever is higher, for the most serious breaches.
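To make the tiered logic concrete, here’s a minimal Python sketch of how the four tiers map onto the broad regulatory responses in Callum’s summary. The tier names come from the draft itself, but the example systems in the comments and the `required_action` mapping are my own illustrative simplifications, not legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring by public authorities)
    HIGH = "high"                  # permitted, but subject to strict obligations
    LIMITED = "limited"            # transparency duties only (e.g. chatbots disclosing they are bots)
    MINIMAL = "minimal"            # no new obligations (e.g. spam filters)

def required_action(tier: RiskTier) -> str:
    """Map a risk tier to the draft regulations' broad response, per the summary above."""
    actions = {
        RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
        RiskTier.HIGH: "conformity assessment, quality data sets, documentation, human oversight",
        RiskTier.LIMITED: "transparency obligations, such as disclosing AI involvement to users",
        RiskTier.MINIMAL: "no additional obligations under the draft rules",
    }
    return actions[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {required_action(tier)}")
```

The point of the structure, and what the code makes obvious, is proportionality: the regulatory burden scales with the risk the system poses, rather than applying one blanket rule to all AI.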


When I was asked during the session what the draft rules mean for business, three areas immediately sprang to mind: parallels with GDPR, internal controls, and talent. First, GDPR began as draft legislation and is now a global benchmark; AI regulation could follow the same path, which would be fantastic. Watch this space!

Second, every company needs effective internal controls, and they become critical as companies scale. Given the added complexity AI will bring, companies need time to implement the changes required to adhere to the legislation while keeping those controls working.

Third, to properly adhere to the rules, organisations will need a resilient workforce that genuinely understands AI and its governance. Business leaders will therefore need to look harder to find and retain the best talent possible, which is a huge task, and they’ll need plenty of time to deliver. For this reason, I actually welcome a longer adoption period. It’s also great news for anyone looking to carve out a career in the field.

During the session, it was also great to hear leading academics share what the proposed rules mean to them. This included Michael Rovatsos, Professor of AI at the University of Edinburgh (amongst other roles), who made a fascinating observation: companies outside the traditional technology sector would become responsible for validating their own products containing AI elements sourced from suppliers, which will require suppliers, end users, and government to work far more closely together. Michael added that this could be an opportunity to showcase strong community relationships and the democratisation of decision-making.

Feedback on the regulations from commentators has been mixed. For most critics, the main problem is the sweeping exemptions that allow law enforcement to use remote biometric surveillance to, say, prevent terrorist attacks or search for missing people. They worry these carve-outs will lead to widespread misuse of such systems.

Another serious issue for many is that the regulations don’t guarantee support for the people they’re designed to protect. For instance, there’s no provision making it mandatory to inform people when their personal details have been processed by an algorithm, such as during a hiring process. The worry is that the rules will therefore fall short of meaningfully preventing AI systems from discriminating against typically marginalised groups.

Despite some creases in an otherwise strong plan, creases I have faith the EU will iron out in the years before Member States adopt the rules, I think it’s about time we saw a major stride forward like this. The Draft AI Regulations represent the dawn of a new age, one with a more meaningful discourse around artificial intelligence. In a world full of talk about taking action (or, more often than not, talk about talking about taking action), it’s important that we eventually graduate from talk to action.

But not everyone agrees this is the right move; some are dead set against it. It’s no great shock that the loudest protests come from technology providers, big and small, whose AI activities the rules will constrain to varying degrees. Perhaps the two biggest innovations illustrate why: high-risk AI systems must pass a conformity assessment before they’re sold or deployed, and providers must run a post-market monitoring system to spot and fix issues once products are in use.

Silicon Valley, at the forefront of AI development to date, has long been frank about its view of AI regulation: lawmakers shouldn’t stand in the way of technological progress. It’s safe to say that if regulators succumbed to that rhetoric, we’d end up deploying AI systems without, for example, rigorous processes to stop algorithmic bias. As the recent UK Post Office scandal showed, blindly trusting the system can lead to massive harm and even deeper public distrust.
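What might such a rigorous process look like? As one small illustration, here’s a hedged Python sketch of a demographic parity check, one of the simplest fairness metrics: it compares positive-decision rates across groups. The data, group labels, and function name are all hypothetical, and a real bias audit would involve many more metrics, proper statistical testing, and domain judgment.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the gap between the highest and lowest positive-decision
    rates across groups (0.0 means every group is approved at the same rate),
    plus the per-group rates themselves."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening outcomes (1 = shortlisted) for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"rates: {rates}, gap: {gap:.2f}")  # rates: {'A': 0.8, 'B': 0.2}, gap: 0.60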

However, companies can’t complain, not really. In the grand scheme of things, the Draft AI Regulations give the industry’s key players very little to keep them awake at night, even though unregulated AI gives us proponents of regulation many a sleepless night! The rules do, however, offer the public some peace of mind: enough to sleep well until they’re tweaked to perfection.


Catriona Campbell

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position. catrionacampbell.com