The EU AI Act: an Impact Analysis (part 2) | Hogan Lovells
Legislative progress update – The AI Act has passed final parliamentary vote
Since part 1 of this series, the European Parliament’s committees on Internal Market and Consumer Protection (“IMCO“) and on Civil Liberties, Justice and Home Affairs (“LIBE“) endorsed the AI Act on 13 February 2024. On 13 March 2024, the European Parliament then formally adopted the consolidated AI Act. Before the AI Act can be signed and published in the Official Journal of the European Union to become effective, the EU Council will have to formally adopt it at the ministerial level as well.
With this update in mind, and having provided an analysis of the AI Act’s core concepts in part 1, we will continue in this part with our analysis of the AI Act’s impact on businesses and how they can best prepare for compliance.
AI Act’s impact on your business
To ensure compliance and avoid liability risks, it is essential for businesses to swiftly and meticulously evaluate the impact of AI and their own risks, both with regard to AI in general and under the AI Act in particular – and to prepare for the transformation that AI will bring to every aspect of their business operations. In this respect, preparations for the AI Act should be seen as part of the overall measures that a company must take to deal properly with the offering and deployment of AI in general.
While the AI Act seems to focus on high-risk systems, with supplementary rules for general purpose AI (“GPAI“), it would be wrong to adopt a “wait-and-see” approach, even if your business does not yet develop, provide, deploy or plan to use a high-risk AI system or GPAI at present. The rapid development of AI-based functions and their use in every organization underlines the need for a governance structure in any company, scaled to the type of AI system being used.
In this article, we will therefore look into a general approach to AI governance.
AI Governance in any company
To ensure and be able to demonstrate compliance with the new requirements under the AI Act, and to avoid risks and liability, the potential implications of the AI Act will need to form part of every company’s overall AI governance program.
AI governance forms part of the digital governance to be implemented within every business, which requires an interdisciplinary approach taking into account various factors, including legal, ethical, risk-management, strategic, practical and other considerations, and which overlaps, and is closely intertwined, with the organization’s data governance program.
Maintaining an appropriate AI governance and compliance system is one further component of the general duty of care applicable to directors of any corporation. Given the business risks involved in the use of AI-based products and services, including in areas such as privacy, intellectual property and trade secrets, and compliance, it would be a serious mistake to ignore the impact of any use of AI systems on the company. Recent practice further shows that regulators, as well as investors, take increasing interest in a company’s readiness to comply with existing and forthcoming regulations on the use of AI.
A possible four-step approach for building an appropriate AI governance program, as applicable also for businesses without specific obligations for high-risk AI systems or GPAI systems or models, could be as follows:
1. AI mapping & inventory
First of all, a company has to determine and document, in a central AI inventory and repository, which AI systems, models and/or output it is going to develop and/or deploy. The documentation should cover factors such as the source or provider of the AI, the type of technology, the intended use, the relevant data processed, the types of data and information used for the model, the groups of individuals affected by the AI system, and any use of third-party data and technology relationships.
The determination of the AI systems should include a clear allocation of responsibilities for the oversight and management of the usage of each AI system during its lifecycle.
The documentation should consider various aspects, such as the territorial scope of application, the intended use cases and interfaces to other systems, and the relevant business relationships with AI suppliers or deployers (including documentation of contractual arrangements).
The necessary mapping and preparation also has a jurisdictional component and requires identifying the relevant applicable legislation and regulatory guidance in the relevant territorial and material scope of application.
The AI mapping and inventory is a living repository, as AI systems, business relationships and use case scenarios constantly evolve over time. The AI Act, as other laws, classifies high-risk systems according to their intended use, so that it is necessary to track the exact use of AI technology (which often, and not only in case of GPAI, provides for broad possibilities of different uses within a company). Therefore, it is essential to implement appropriate procedures within a company to review and update the documentation on a regular basis.
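By way of illustration only, an entry in such a living AI inventory could be sketched as a simple record structure with a periodic-review flag. All field names and the review interval below are our own assumptions for the sketch, not terms or requirements taken from the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative sketch of a central AI inventory entry. Field names and the
# review interval are assumptions, not terminology from the AI Act.
@dataclass
class AIInventoryEntry:
    system_name: str
    provider: str                  # source or provider of the AI
    technology_type: str           # e.g. "GPAI-based chatbot", "ML classifier"
    intended_use: str              # intended use drives risk classification
    data_categories: list[str]     # data processed / used for the model
    affected_groups: list[str]     # groups of individuals affected
    third_party_dependencies: list[str] = field(default_factory=list)
    responsible_owner: str = ""    # clear allocation of oversight responsibility
    territories: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def review_due(self, interval_days: int = 180) -> bool:
        """Flag entries for periodic review, since use cases evolve over time."""
        return date.today() - self.last_reviewed > timedelta(days=interval_days)
```

A procedure that iterates over such entries and flags every record where `review_due()` is true would give effect to the regular review and update obligation described above.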
Even if a company does not use AI systems, it should be aware that its contract partners might do so. Accordingly, it should map out its business’s sensitivities and include appropriate safeguards within its vendor and business partner due diligence, to identify whether partners are using, or planning to use, AI and, if so, what guardrails they have implemented or are planning to implement.
2. Impact & compliance gap risk analysis
The next steps include:
- Applicability & Impact Analysis: Taking into account the AI mapping & inventory (step 1), this step requires assessing what laws and other relevant considerations apply to the products and services offered, deployed and/or received by your company, and how these laws and considerations impact the business operations.
- Compliance Gap & Risk Analysis: This step requires evaluating legal compliance gaps, identifying and rating the relevant risks, and determining potential compliance measures.
The Applicability & Impact Analysis involves assessing how the related legal and regulatory landscape for AI in the specific jurisdiction and industry (including the AI Act, sector-specific laws, and general laws, such as in the area of IP/trade secrets and data protection) applies to and impacts the specific business of your company.
Within the scope of the AI Act, this requires an appropriate classification of AI systems and scoping of intended use cases. To the extent the AI Act is applicable, businesses should assess what risk category and set of obligations their specific uses of AI fall into. As we laid out in part 1, the AI Act categorizes AI systems according to the risks of their capabilities and utilization, and allocates respective sets of obligations, with the risk categories being:
- Unacceptable risk – use generally prohibited
- High risk – set of extensive compliance obligations, including conformity assessment
- Limited risk – limited transparency obligations
- Minimal risk – potential obligations under (voluntary) code of conduct
An additional set of obligations applies to providers of GPAI (in particular, where the AI triggers systemic risks).
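The tiered structure above is essentially a lookup from risk category to obligation set. Purely as an illustration, it could be sketched as follows; the tier names and one-line summaries are our paraphrases of the list above, not statutory text.

```python
# Illustrative mapping of the AI Act's risk tiers to their obligation sets,
# paraphrasing the summary above; not statutory language.
RISK_TIERS: dict[str, str] = {
    "unacceptable": "use generally prohibited",
    "high": "extensive compliance obligations, incl. conformity assessment",
    "limited": "limited transparency obligations",
    "minimal": "potential obligations under voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Look up the obligation set for an AI system classified into a tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}") from None
```

In practice, of course, the classification itself (which tier a given use case falls into) is the legally demanding step; the lookup of the resulting obligations is the easy part.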
The Compliance Gap & Risk Analysis is an essential step in identifying and managing business risks.
- The compliance gap analysis, as a systematic review process, enables the business to identify the difference between current practice and the compliance requirements of the relevant legal, regulatory and industry standards (as well as internal policies and compliance standards). It enables the organization to systematically identify the specific areas where it is not currently meeting requirements and to take targeted actions to ensure compliance (such as an action plan to revise internal policies, implement new controls or provide training).
- The risk analysis enables the business to assess the likelihood and severity of the risks and consequences associated with any non-compliance identified in the gap analysis. It helps the business to efficiently prioritize the compliance gaps on a risk basis and to effectively start identifying appropriate mitigation measures. Potential compliance risks include, for example, regulatory enforcement (including financial penalties such as fines), reputational damage, and potential disruptions of business operations.
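A common way to operationalize this likelihood-and-severity assessment is a simple risk matrix. The sketch below is illustrative only: the 1–5 rating scales and the multiplicative score are conventional risk-management assumptions, not anything prescribed by the AI Act.

```python
# Illustrative risk-based prioritization of compliance gaps using a classic
# likelihood x severity risk matrix. Scales (1-5) are assumptions.

def risk_score(likelihood: int, severity: int) -> int:
    """Risk-matrix score: likelihood times severity, each rated 1-5."""
    for value in (likelihood, severity):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return likelihood * severity

def prioritize(gaps: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Order identified compliance gaps by descending risk score."""
    scored = [(name, risk_score(lik, sev)) for name, (lik, sev) in gaps.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

For example, a gap rated likelihood 4 and severity 5 (score 20) would be addressed before one rated 3 and 2 (score 6), which is the risk-based ordering described above.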
3. Development of an AI strategy & governance program
Based on the findings of the applicability & impact and compliance gap & risk analysis, businesses should determine their overall business strategies on how to integrate the requirements and necessary compliance steps into their broader goals and values as well as already existing structures and procedures, and how to build an effective governance program that ensures compliance with legal requirements while supporting the business objectives.
Steps to consider are:
Determining organizational structure, roles and responsibilities. One of the current challenges for businesses is how to determine the appropriate organizational structure for building adequate AI governance within their organizations. The diverse nature of the topics emerging when offering, deploying or using AI requires an interdisciplinary, cross-functional and coordinated approach across the organization. This will involve developing the overall organizational framework, assigning new roles to certain positions or creating entirely new functions, assigning clear responsibilities to these roles, establishing reporting and coordination mechanisms, and setting up cooperation for each stage of the AI lifecycle and the different responsibilities connected to these stages.
Developing policy framework, standards and procedures. This includes determining the policies, standards, and procedures required within an organization not only to ensure, and be able to demonstrate, compliance with the legal requirements under applicable laws, including the AI Act, but also to achieve the relevant business objectives and to protect company interests and assets. This policy framework needs to be developed in light of various legal, ethical and other considerations, and be aligned with company policies and requirements in various other areas, such as data governance, data protection, intellectual property, protection of trade secrets, competition law, IT- and cybersecurity, risk management, and various others.
Implementing technical measures and organizational procedures. This step requires establishing technical and organizational measures to ensure effective execution of the governance framework within the organization. In particular, the AI Act obliges providers of high-risk AI systems to ensure technical measures concerning traceability, transparency, human oversight, data governance, cybersecurity and robustness. Furthermore, organizational procedures must be established. For high-risk AI systems, these procedures are required in particular in the context of risk management, technical documentation, quality management, conformity assessment, testing and monitoring, incident reporting and registration.
Furthermore, the governance program must be implemented internally in an effective, binding and enforceable manner. This requires, inter alia, management buy-in (“tone from the top”), mechanisms ensuring the binding nature of policies, standards and procedures, appropriate training and human oversight, ensuring AI literacy, and regular monitoring and controls (including potential sanctioning of misconduct within the company).
In particular, appropriate training forms an essential component of an effective internal compliance program. The AI Act requires businesses to establish “AI literacy” among staff and other persons dealing with the operation and use of AI systems on their behalf. AI literacy is understood to refer to the skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations, to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and the possible harm it can cause.
Building and maintaining compliance documentation. To be able to demonstrate compliance with applicable legal requirements, international standards and internal policies, the AI governance program requires appropriate documentation of compliance measures and standards implemented within the organization. The AI Act, in particular, specifies certain documentation to be established and maintained by businesses falling under the scope of the Regulation.
In general, components of an appropriate documentation can entail:
- AI policy documentation (as appropriate to roles as provider or deployer), including the company’s general principles, standards and procedures for handling AI, and policies potentially covering various topics such as compliance guidelines for high-risk AI systems/GPAI, risk assessment, conformity assessment and quality management for high-risk AI systems, AI developer guidelines, responsible use policies, content creation guidelines, rules for training and prompting AI, etc.;
- AI asset inventory lists, including approved AI tools and use cases, model uses, and relevant data (sources), etc.;
- AI standard notices and templates, including contract templates, transparency information, risk assessment/FRIA templates, AI playbook, etc.;
- AI repositories (general/ high risk AI/ GPAI) and compliance measures documentation, including technical documentation, record-keeping/logs, risk assessments, and other documentation necessary to comply with the requirements for high-risk AI systems (such as conformity assessments and quality management systems) and GPAIs, etc.;
- AI vendor and business partner documentation, including third party mapping, vendor due diligence questionnaires, vendor compliance audits, reviews, and certifications, third party contracts, etc.;
- AI awareness and training materials for employees, contractors and business partners, and relevant training records, etc.;
- AI audit / review documentation, including audit program and documentation on compliance audits, reviews and controls, third party certificates, etc.
4. Audits, controls & monitoring
First, this means implementing regular audits and controls to review, update and improve the company’s governance program, including the effectiveness of policies, standards and procedures.
Second, the regulatory landscape must be continuously monitored; new laws, jurisprudence, codes, regulatory guidance and good practice standards are emerging on all ends and may bring new requirements and obligations which require the company’s governance system to be adapted and refined.
5. Global context
Last but not least, businesses should be aware that, at the international level, the EU institutions will continue to work with multinational organizations, including the Council of Europe (Committee on Artificial Intelligence), the EU-US Trade and Technology Council (TTC), the G7 (Code of Conduct on Artificial Intelligence), the Organisation for Economic Co-operation and Development (“OECD”) (Recommendation on AI), the G20 (AI Principles), and the UN (AI Advisory Body), to promote the development and adoption of rules beyond the EU that will have to be aligned with the requirements of the AI Act.