Are You Ready for the EU AI Act?

The EU’s AI Act is the world’s most comprehensive AI regulation to date. What does that mean for your business?

Jason Albert, Global Chief Privacy Officer

June 12, 2024


We are in the early days of a new era of artificial intelligence (AI). As AI products and services increasingly come to market, policymakers around the world are grappling with how to realize the benefits and potential of this new technology while guarding individuals against its risks. Ultimately, society determines the rules that govern the use of technology, striking the right balance between innovation and protection.

The European Union has passed the landmark Artificial Intelligence Act. Now finalized, it is the first comprehensive regulation of artificial intelligence adopted anywhere in the world. We have seen in other areas, notably privacy, that the EU’s legislative approach has had significant influence elsewhere, although it is too early to say whether policymakers will treat the AI Act as a model. The Act takes a horizontal approach, regulating AI whether it is a standalone software offering or embodied in hardware, such as a self-driving car. It also takes a life-cycle approach, regulating AI from the quality of the data used to develop a system, through testing for accuracy and bias, human oversight, and deployment, to post-market monitoring. Providers, importers, distributors, and deployers of AI systems all face obligations under the Act.


Because AI systems made available on the EU market will often also be sold elsewhere in the world, and because the EU AI Act will also apply where outputs of an AI system are used in the EU, a consistent global approach to implementing its requirements is advisable to ensure compliance.  

Key Elements of the EU AI Act

The EU AI Act addresses three key risk areas (see the sketch after this list):

  • First, it bans certain uses of AI that are seen as posing unacceptable risks -- for example, real-time biometric identification by law enforcement in public settings. 

  • Second, it adopts a regulatory regime for so-called high-risk use cases -- those where the use of AI could impact the rights or significant opportunities available to individuals (e.g., in law enforcement, education and employment). 

  • Third, for foundation models such as large language models, which the Act terms general-purpose AI (GPAI), it imposes transparency obligations so that users have more information about how those models are developed and when they are being used.
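
To make these tiers concrete, here is a minimal sketch, in Python, of how a compliance team might tag AI use cases with the Act’s risk categories during an internal inventory. The category labels and example assignments are illustrative assumptions, not legal determinations under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the EU AI Act's risk categories."""
    PROHIBITED = "unacceptable risk (banned practices)"
    HIGH_RISK = "high risk (regulated uses)"
    GPAI = "general-purpose AI (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"

# Hypothetical inventory entries; real assignments require legal review
# of the Act's annexes and any implementing guidance.
use_cases = {
    "real-time biometric ID in public by law enforcement": RiskTier.PROHIBITED,
    "resume screening for hiring decisions": RiskTier.HIGH_RISK,
    "customer-facing chatbot built on an LLM": RiskTier.GPAI,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in use_cases.items():
    print(f"{use_case}: {tier.value}")
```

Even a simple tagging exercise like this forces the inventory question (which systems do we have, and which tier does each fall into?) before any deeper compliance work begins.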

Risk Management and AI Governance

Risk management is at the core of the EU AI Act’s approach. Because of this, companies should identify who owns overall responsibility for risk management. That person should then work with a cross-functional team, including individuals from legal, privacy, security, and the business, to map the AI systems the company is developing or using, evaluate the potential risks each system poses, and identify how to address or mitigate them.
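
As a sketch of what that mapping exercise might produce, the following Python fragment models one entry in a hypothetical AI system register, pairing each system with an owner, its identified risks, and its mitigations. All field names are assumptions for illustration; adapt them to your own governance process.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI system inventory."""
    name: str
    business_owner: str       # accountable person or team
    purpose: str              # what the system is used for
    developed_in_house: bool  # built internally vs. procured
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        # Flag systems with more identified risks than mitigations.
        return len(self.identified_risks) > len(self.mitigations)

register = [
    AISystemRecord(
        name="candidate-screening model",
        business_owner="HR operations",
        purpose="rank applicants for interviews",
        developed_in_house=False,
        identified_risks=["bias against protected groups", "data drift"],
        mitigations=["quarterly bias audit"],
    ),
]

for record in register:
    if record.needs_review():
        print(f"Review needed: {record.name}")
```

A register like this gives the cross-functional team a shared artifact to review and makes gaps, such as risks with no corresponding mitigation, easy to surface.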


There is strong alignment between good AI governance and the requirements of the EU AI Act. In any AI development, it is important to have a process to identify, assess, and manage risks. Similarly, to ensure good outputs, companies need to use high-quality data. It is also essential to monitor the performance of the AI model to detect drift and mitigate potential bias. And any AI that makes recommendations should be subject to human oversight. All in all, compliance requires an end-to-end focus on AI governance.
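
The Act does not prescribe a particular monitoring technique, but one common statistic for detecting the drift mentioned above is the population stability index (PSI), which compares a model’s score distribution at deployment with its current distribution. The threshold and binning below are conventional rules of thumb, not requirements of the Act.

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float],
                               bins: int = 10) -> float:
    """Compare two score distributions; larger values mean more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero width if all scores equal

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.20, 0.30, 0.35, 0.40, 0.50, 0.55, 0.60, 0.70]  # at deployment
current = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85]   # this month

psi = population_stability_index(baseline, current, bins=5)
# PSI above ~0.2 is a conventional (not statutory) signal to investigate.
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> stable")
```

Bias monitoring would add similar checks computed per demographic group, feeding into the human-oversight process the Act calls for.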

From a business strategy perspective, companies must think holistically about how to incorporate AI governance into their development and compliance processes. When developing AI products, businesses need to assess whether those products fall within the Act’s reach and establish processes to drive compliance with its obligations. When a company deploys AI systems developed by others that fall under the Act, it must use those systems in accordance with the instructions provided by the system developer and ensure human oversight of any system outputs.


In many ways, this resembles compliance obligations in other areas. However, it will be important for companies to consider the unique aspects of AI, including new risks not addressed by other legal frameworks, and to develop the expertise needed to use AI systems responsibly. And as you implement the Act, there may be provisions that make sense to apply only to offerings put on the European market, just as companies sometimes apply specific provisions of GDPR only to data originating in Europe.

Overall, building a strong AI governance program will get you much of the way there. You will still need to account for the specifics of the EU AI Act, but as with a strong privacy program, those are adjustments on top of a strong foundation.

How to Get Started

A number of resources and tools can provide ideas for designing and implementing an AI governance program. The U.S. National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework and supporting materials that guide companies through governing, mapping, measuring, and managing AI risks. In addition, risk management programs from other compliance areas, such as privacy, provide a helpful template for designing a program for AI systems.

When Will the EU AI Act Go Into Effect?

The EU AI Act provides different timelines for implementing its provisions depending on the risk category, counted from the Act’s entry into force, which occurs 20 days after its publication in the Official Journal (a short sketch after the list turns these offsets into calendar dates):

  1. Unacceptable-risk AI: six months

  2. General-purpose AI: 12 months

  3. High-risk AI: 24 months

  4. All remaining provisions, with some exceptions: 36 months
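
As referenced above, a small Python sketch can turn those offsets into calendar dates. The entry-into-force date below is an assumption for illustration only; substitute the actual date once the Act is published in the Official Journal.

```python
from calendar import monthrange
from datetime import date

def add_months(start: date, months: int) -> date:
    """Add calendar months, clamping the day for shorter months."""
    month_index = start.month - 1 + months
    year = start.year + month_index // 12
    month = month_index % 12 + 1
    day = min(start.day, monthrange(year, month)[1])
    return date(year, month, day)

# Assumed entry-into-force date, for illustration only; replace it with
# the actual date once the Act is published in the Official Journal.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Unacceptable-risk bans apply": 6,
    "General-purpose AI obligations apply": 12,
    "High-risk obligations apply": 24,
    "Remaining provisions apply": 36,
}

for label, offset in milestones.items():
    print(f"{label}: {add_months(entry_into_force, offset)}")
```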

Conclusion

As with any new law, it will be important to see how the AI Act is applied. Much is left to standards development, so how those standards shape compliance obligations will matter. In addition, several regulatory bodies in Member States will be responsible for enforcing or advising on aspects of the Act, and watching how their guidance develops over time will be informative. These aspects also provide flexibility, so that the Act can continue to function even as AI technology advances in new and unanticipated ways.

About the Author

Jason Albert

Global Chief Privacy Officer, ADP

Jason Albert is the Global Chief Privacy Officer of ADP, where he leads the company’s privacy strategy, governance, and compliance across all markets and regions. He oversees a team of privacy professionals who collaborate with business units, legal, security, and external stakeholders to ensure that ADP’s products and services meet the highest standards of data protection and user trust. Jason also advises senior leadership and the board on emerging privacy trends, risks, and opportunities, and represents ADP in industry associations, regulatory forums, and public policy debates.
