Competing European Values under the Spotlight of the AI Act
Europe has been a frontrunner in the regulation of artificial intelligence on a global scale. The adoption of the Artificial Intelligence Act (AI Act) marks one – albeit important – piece of the puzzle of European policy on AI. Following its adoption by the Council last week, such an ambitious approach is still surrounded by scepticism, particularly concerning its potential impact on the global competitiveness of the European technological ecosystem.
Regulation is often implemented with the intention of protecting fundamental rights and achieving public interest goals, including consumer protection, fair market practices and stability, and the AI Act is no exception to this rule. However, the AI Act has a limited scope when it comes to the protection of fundamental rights. Despite its focus on “European values” recalling Article 2 TEU, and the important step of introducing the fundamental rights impact assessment, its approach is still far from more human-centric legislation such as the Digital Services Act or the General Data Protection Regulation, as particularly underlined by the lack of judicial remedies for infringements of the Regulation.
Within this framework, fundamental rights, as one of the primary justifications to legislate, enjoy only a limited scope of protection, and the regulatory choices in the AI Act cannot counterbalance the potential unintended negative outcomes for the other side of the coin making up a liberal democracy. Indeed, the main issues are not only related to the risk of over-regulation in the EU (a debate which is very much welcome) but to potential (legal) barriers to competition, particularly barriers to entry reinforcing market consolidation, which would not only affect the internal market but would contribute to creating areas of market and political power affecting constitutional democracies.
The duality of European values
The expansion of European regulation in the field of AI is part of a broader trend which is not connected to the mere booming and spread of artificial intelligence applications. Since the launch of the Digital Single Market Strategy in 2015, the EU has progressively extended its regulatory reach over the digital environment in the name of European values.
These values are primarily focused on respect for human dignity, freedom, equality, democracy and the rule of law, as well as Union fundamental rights, including the rights to non-discrimination, data protection and privacy and the rights of the child, as enshrined in Article 2 TEU and the Charter of Fundamental Rights of the European Union. The objective is to ensure that overriding reasons of public interest, such as a high level of protection of health, safety, and fundamental rights, are not left behind. At the same time, we cannot underestimate how the EU has also built its identity on the need to ensure fundamental freedoms and competition, which have played a foundational role in the EU economic integration process since the beginning and will continue to play an important role in creating a market for AI in Europe. The regular reliance on Article 114 TFEU for the purposes of harmonising the internal market in areas which are primarily related to democracy, as in the case of the European Media Freedom Act, demonstrates an increasing convergence between market and democracy in Europe (Cseres explores this point here).
This European regulatory brutality
Recently, both the revised Market Definition Notice and the Commission’s competition policy brief have invoked democratic values among the objectives of competition enforcement.
One could argue, however, on this last point, that the policy brief devotes too much attention to the necessary preservation of plurality in a democratic society, and too little to the particular context in which the General Court delivered that same pronouncement, i.e., the broader analysis of Google’s abusive practices, rather than an overinclusive statement on the expansion of the objectives of competition regulation. Even though we can agree that the wider EU regime seeks to secure democratic values and societies, the increasing overlap of each of these values under the common denomination of ‘European values’ may well reflect an increasingly expeditious manner of doing away with legal standards, thresholds and procedural safeguards.
In this context, the European Commission’s recent decision fining
The enforcement of risk
Such complexity in the conflation of applicable regulations and European values can also make enforcement more unpredictable. The shift towards European values will require competent authorities to interpret the regulatory framework and to strike a balance between competing constitutional interests. In particular, the questions around enforcement will primarily concern the rules on risk assessment. The different layers of risk, as specified in Annex III for high-risk applications, raise critical interpretative issues in light of the evolution of different technological applications. Risk is indeed a notion open to possibilities. Unlike traditional legal approaches based on a black-and-white logic shaped by interpretation, risk admits multiple possibilities which could lead to a certain legal consequence. The complexity of enforcing a risk-based approach will be a critical challenge for enforcement authorities, also considering the different approaches to risk followed by different legal instruments and their interpretation by the Court of Justice of the European Union.
Furthermore, the complex vertical and horizontal relationships of power between the European Commission, competition authorities and other public authorities (for one, the European AI Office) will add further layers of coordination to this picture.
The enforcement of the AI Act will still see national authorities designated by each of the Member States as protagonists. Even though the AI Act will not be fully applicable until 2026, with some exceptions for specific provisions such as those on prohibited AI systems and on generative AI, which will apply after 6 and 12 months respectively, the Member States are already called to build an enforcement infrastructure of their own. In this context, the European Artificial Intelligence Office will play a critical role in coordinating enforcement efforts: unlike in other areas, including the GDPR or the DSA, the AI Act provides for a coordinating authority for enforcement. It is also important to consider that part of the enforcement of the AI Act will be in the hands of certifying organisations for high-risk systems, as well as of private enforcement which, albeit limited, allows groups to lodge complaints with the supervisory authority in order to trigger sanctioning mechanisms. In the case of the AI Act, fines could go up to EUR 35 million or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Perspectives for the internal market
The adoption of the AI Act marks a significant step in Europe’s proactive stance towards regulating artificial intelligence. While aimed at protecting fundamental rights and achieving public interest goals, the AI Act also raises concerns about potential unintended consequences, particularly in terms of market competitiveness (the buzzword in European fora in recent months) and the rule of law (see Neves’ paper).
The pressure on competition thriving in the internal market would result from a broader stretch of resources which only some players can dedicate to understanding the complex web of risk regulation in Europe. Considering the horizontal application of AI across different sectors and its deep connection with the processing of (personal) data, the AI Act has already raised questions about its coordination with other legal measures, particularly the GDPR. It suffices to mention how the introduction of the fundamental rights impact assessment in the AI Act raises questions about other risk obligations stemming from the GDPR, most notably the Data Protection Impact Assessment, and about the obligation to assess risks for very large online platforms under the Digital Services Act. Furthermore, the enforcement of the AI Act will require navigating the concept of risk, which is inherently subjective and open to interpretation, and the complex relationship between the supranational and national systems of coordination. This poses a significant challenge for enforcement authorities and raises questions about the consistency of enforcement actions.
The introduction of the AI Act is likely to bring consequences for the functioning of the internal market and, broadly speaking, for innovation, which, however, are not yet measurable. Even if it could produce anti-competitive effects in the internal market, the AI Act aims to reposition the rule of law in the digital age by limiting the reliance on self-regulation, including ethical narratives, in relation to the spread of these technologies. In this sense, it is a central piece of European digital constitutionalism. As a result, the complex framework for competition raises the bar of standards to protect European values, which are not merely related to the protection of fundamental freedoms but also to the protection of fundamental rights and democratic values.
_________
If you, like us, are still wondering about how EU competition law and digital constitutionalism intersect, we are co-chairing a panel including Marco Botta (EUI and University of Vienna), Kati Cseres (University of Amsterdam), Katarzyna Sadrak (DG Competition) and Inês Neves (University of Porto/Morais Leitao), to be held online on 12 June at 5.00 pm CEST, discussing the topic. We’d be glad if you would join us! To do so, just click here