Navigating the competition law landscape in the age of generative AI
Generative AI (or Gen AI) has the potential to change our lives and the face of industry irrevocably. This brings with it unknown risks. One of those (but certainly not the only one, as explained in our AI Toolkit) is the interplay between competition law and Gen AI.
Gen AI is fast becoming the favourite new topic of countless competition regulators. Authorities around the world are scaling up their expertise in relation to the Gen AI ecosystem (or stack) and Gen AI-driven collusion and information exchange. The undercurrent is a desire not to let the history of perceived under-enforcement in tech markets repeat itself. The relevance of this topic goes beyond tech markets, however: it matters for any company using, or thinking about using, Gen AI in its business.
Against this background, it is important for in-house counsel to anticipate regulatory trends and prepare for potential compliance challenges and risks associated with Gen AI. Lodewick Prompers, partner in our Brussels office, sat down with Thibault Schrepel, a leading scholar on AI and competition, to discuss and provide insight on this fast-evolving field of competition law enforcement. You can access the interview here.
In this blog post we set the scene, providing background on the Gen AI ecosystem and explaining the concerns already identified by competition authorities in relation to the use of Gen AI.
The Gen AI ecosystem
As opposed to other AI systems, Gen AI systems create novel outputs. The Gen AI ecosystem consists of multiple interconnected and interdependent layers, each of which plays a critical role in the overall functionality and efficiency of Gen AI systems. At its most basic, the Gen AI ecosystem can be split into the following layers:
- the infrastructure layer, covering key inputs such as specialised chips, computing power and data;
- the development layer, where foundation models are built and trained; and
- the deployment layer, where Gen AI is integrated into products and services offered on the market.
Competition concerns in relation to Gen AI
Gen AI can positively impact competition in many ways. It can lead to higher quality, lower prices and potentially more personalised products and services. It can enhance efficiency (for example by automating basic tasks) and innovation. It can increase market transparency and facilitate market entry and access, in particular by making data more accessible and reducing information asymmetries, thereby levelling the playing field and reducing barriers to entry. It can reduce human error and exceed human performance. It can even be used to monitor employees' compliance with competition law.
At the same time, competition authorities around the world are considering potential concerns in relation to Gen AI. We have summarised below the common themes we see emerging:
- The setup of the Gen AI ecosystem: Competition authorities are focusing heavily on the infrastructure layer, driven by concerns about control over key inputs (and potential bottlenecks) at the upstream level, such as access to specialised chips, computing power, data and technical expertise, and about the impact of that setup on (disruptive) innovation by smaller market participants. Authorities seem to apply the traditional "Big Tech" and "winner takes all" framework to the Gen AI stack. In a recent report, the UK CMA even introduced a new acronym for the most significant firms: "GAMMAN" (Google, Apple, Microsoft, Meta, Amazon and Nvidia). At the same time, AI markets are characterised by a constantly growing and diverse range of highly successful players and evolving business models (just think of OpenAI). It is therefore arguably far from clear where market power lies, or will lie, which also warrants caution.
- Partnerships within the Gen AI ecosystem: In keeping with the zeitgeist, regulators have displayed significant interest in the emerging strategic partnerships between the major cloud computing firms and Gen AI developers. The UK CMA alone opened five merger investigations in relation to such partnerships, including Microsoft’s hiring of Inflection staff (which the EU was unable to continue reviewing). Other authorities are considering changes to merger rules to catch AI partnerships.
- The deployment of Gen AI on markets: Concerns that have been raised include the risk that algorithms can allow competitors to share competitively sensitive information, fix prices or collude on other terms or business strategies; and the risk that algorithms may enable firms to undermine competition through unfair price discrimination or exclusion.
Current competition rules and enforcement mechanisms provide a baseline for addressing some of the concerns set out above. However, authorities are also considering whether the current rules are entirely sufficient, given the unique and evolving nature of AI technologies. At the same time, there is a potential trade-off between different policy objectives. For example, while open-source models may foster competition, they could raise other risks around AI safety. Similarly, imposing regulatory obligations on firms could make it harder for smaller firms to comply, and therefore to compete. How these tensions will be resolved is far from clear, but what is certain is that regulatory attitudes are evolving at an unprecedented pace.