
Regulators’ joint statement on competition in artificial intelligence

Briefing
5 August 2024

On 23 July 2024, the UK’s Competition and Markets Authority (“CMA”), the European Commission, the US Department of Justice (“DoJ”) and the US Federal Trade Commission (“FTC”) (together, the “Competition Authorities”) published a joint statement on competition in artificial intelligence (“AI”).1

Focusing specifically on the risks to competition and consumers presented by generative AI
foundation models and AI products, the joint statement was made by:

  • Margrethe Vestager, Executive Vice-President and Competition Commissioner, European Commission;
  • Sarah Cardell, Chief Executive Officer, CMA;
  • Jonathan Kanter, Assistant Attorney General, DoJ; and
  • Lina M. Khan, Chair, FTC.

Generative AI foundation models (i.e. the pre-trained AI systems underlying products such as OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Meta’s Llama) are so named because they act as the foundation for the development of more complex and sophisticated models. Using self-supervised learning and transfer learning, these models apply what they learn in one context to others, boosting accuracy whilst remaining cost-effective.

The joint statement acknowledges that while these new AI services have great potential benefits, there are also clear risks that require ongoing vigilance. The Competition Authorities underlined their commitment to work to ensure effective competition and the fair and honest treatment of consumers and businesses, guided by their respective laws. We published a briefing in April 2024 examining the EU AI Act alongside other jurisdictions’ approaches to AI regulation.2

Joint Statement

The Competition Authorities claim that given the rapid evolution of generative AI foundation models in recent years, and the many unknowns about the precise trajectory these tools will take, we are approaching a “technological inflection point”, i.e. a point of change in technology where a new approach leads to significant improvement or disruption. Such inflection points can introduce new means of competing, catalysing opportunity, innovation, and growth. But the Competition Authorities emphasise the need to be vigilant and to safeguard against tactics that could undermine fair competition, to ensure that the public can reap the full benefits.

Risks to competition and consumers

The Competition Authorities consider that there are three chief competition risks posed by generative AI foundation models and AI products.

  1. Concentrated control of key inputs. There are a number of critical ingredients needed to develop foundation models (e.g. specialised chips, substantial computing power, data at scale, and specialist technical expertise). These ingredients could allow some companies to exploit existing or emerging bottlenecks across the AI stack (i.e. the layered framework of tools and technologies that allows the AI system to operate efficiently and effectively). These companies could limit the scope of disruptive innovation, or morph it to their own advantage, at the expense of fair competition.
  2. Entrenching or extending market power in AI-related markets. Large incumbent digital firms already benefit from strong accumulated advantages (e.g. enjoying substantial market power at multiple levels of the AI stack) and such firms could extend or entrench their positions by controlling the distribution channels of AI or AI-enabled services to people and businesses, to the detriment of future competition.
  3. Arrangements involving key players. There have been a number of partnerships, financial investments, and other arrangements between firms involved in the development of generative AI to date. ‘Reverse acqui-hires’, in which Big Tech companies take over the employees and in some cases license the technology of AI startups without acquiring the startups outright, have become increasingly popular of late. Microsoft is currently under investigation by the CMA for hiring certain former employees of Inflection AI,3 Amazon is being investigated by the FTC for a similar arrangement with Adept,4 and Microsoft is also at the centre of a three-pronged antitrust probe involving the European Commission, CMA and FTC for its $13bn investment into OpenAI. In June 2024, the European Commission decided not to proceed with a merger review into the Microsoft-OpenAI partnership due to a lack of evidence that Microsoft controls OpenAI, but Margrethe Vestager, the bloc’s Competition Commissioner, announced that the Commission had remaining questions on whether certain exclusivity clauses in the arrangement could have a negative effect on competitors.5,6 The joint statement accepts that not all arrangements involving key AI players will prove harmful, but in some cases Big Tech companies could use them to undermine or subsume competitive threats.

The Competition Authorities acknowledge that other competition risks, in addition to those listed above, can arise when AI is deployed in markets, e.g. the risk that algorithms can allow competitors to share competitively sensitive information, fix prices, or collude on other terms or business strategies in violation of competition laws; or the risk that algorithms may enable firms to undermine competition through unfair price discrimination or exclusion.

The joint statement also notes that “AI can turbocharge deceptive and unfair practices that harm consumers”. For example, firms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy; firms that use business customers’ data to train their models could also expose competitively sensitive information; and consumers should always be kept informed about when and how AI applications are employed in the products and services they purchase or use. The CMA, the DoJ and the FTC, all of which have jurisdiction to enforce consumer protection law, reiterated their commitment to remain vigilant of any consumer protection threats that may be derived from the use and application of AI.7

Competition protection principles

The Competition Authorities note that whilst issues of competition in AI will often be fact-specific, three common principles can help to preserve competition and foster innovation.

  1. Fair dealing. When firms with significant market power engage in exclusionary tactics, they can discourage investment and innovation by third parties, undermining competition.
  2. Interoperability. Competition and innovation around AI will likely be greater the more that AI products and services, and their inputs, are able to interoperate with each other. The Competition Authorities say that they will closely scrutinise any claims that interoperability requires sacrifices to privacy and security.
  3. Choice. Businesses and consumers in the AI ecosystem benefit from a choice between diverse products and business models, so the Competition Authorities say they will scrutinise lock-in mechanisms that could prevent companies or individuals from seeking or choosing meaningful alternatives. They also say they will scrutinise investments and partnerships between incumbents and newcomers, to ensure that these agreements do not sidestep merger enforcement or hand incumbents undue influence or control in ways that undermine competition.

HFW comment

In the joint statement, the Competition Authorities concede that their legal powers differ. The EU has a comprehensive AI regulatory regime in the EU AI Act, which entered into force on 1 August 2024. The former UK Conservative Government published a cross-sector and outcome-based framework for AI regulation in February 2024, and the current UK Labour Government had planned to introduce an AI Bill
in the 2024 King’s Speech, although those plans have since been delayed. In the US, President Biden issued an Executive Order on Safe, Secure, and Trustworthy AI in October 2023, and issued new guidance in March 2024 on how federal agencies can and cannot use AI.

As yet, the US and UK have stopped short of introducing hard law on AI, citing the rapidly changing nature of the area, and noting that to take action before they fully understand the risks and appropriate mitigations would be premature. Indeed, whilst proponents of the EU AI Act have praised the bloc for rapidly implementing regulations to promote safety, critics have said the law is too vague and restrictive, and risks stymying growth and innovation.

As the Competition Authorities note, if the risks described above materialise, they will likely do so in a way that does not respect international boundaries. This joint statement represents a commitment by the four signatories to share knowledge and resources, in order to combat more effectively the risks they describe. It is the second such agreement the UK has entered into after hosting the AI Safety Summit at Bletchley Park in November 2023. On 1 April 2024, following commitments made at the summit, the UK and US signed a landmark bilateral agreement on AI safety, laying out plans to pool technical knowledge and capabilities for the purpose of co-operative AI testing.8 On 24 July 2024, one day after the publication of the joint statement, the UK Government announced a new UK-India Technology Security Initiative that will see the two countries collaborate on a range of important technologies, including telecoms, critical minerals, quantum, biotech, advanced materials, semiconductors – and AI.9 The overarching aim of these agreements is to place the UK in a central role in the global drive towards AI development and governance.

Next steps

Whilst the UK’s bilateral agreements with the US and India appear to focus on growth, safety and mutual cooperation, this joint statement reads as a stark warning to firms with substantial market power in AI-related sectors.

The Competition Authorities’ central message is that they are alert to the potential for anti-competitive practices in these sectors and intend to tackle infringing businesses robustly. Key AI players will need to be particularly careful when entering into partnerships and arrangements, and should avoid acting in ways that could be seen to concentrate control of key inputs or to entrench or extend market power in AI-related markets. They will generally need to have regard to the principles of fair dealing, interoperability and freedom of choice.

All businesses facing a barrier or restraint should be alive to potential competition law infringements, which they could bring to the attention of the Competition Authorities by way of a complaint or a direct challenge.


Footnotes

  1. Competition and Markets Authority. Joint statement on competition in generative AI foundation models and AI products. Available at: GOV.UK (www.gov.uk)
  2. HFW. European Parliament Approves Landmark Artificial Intelligence Act. Available at: HFW
  3. Competition and Markets Authority. Microsoft / Inflection inquiry. Available at: GOV.UK (www.gov.uk)
  4. Reuters. Exclusive: FTC seeking details on Amazon deal with AI startup Adept, source says. Available at: Reuters
  5. The Register. Antitrust latest: Europe’s Vestager warns Microsoft, OpenAI ‘the story is not over’. Available at: The Register
  6. The CMA is currently investigating the Microsoft-OpenAI arrangement to determine whether it constitutes a notifiable merger under the Enterprise Act 2002, following an invitation to comment that closed in January 2024. The FTC is also scrutinising the deal before deciding whether to open a formal antitrust case against the parties.
  7. Responsibility for the enforcement of EU consumer protection law lies with national authorities in Member States.
  8. Department for Science, Innovation and Technology. UK & United States announce partnership on science of AI safety. Available at: GOV.UK (www.gov.uk)
  9. Department for Science, Innovation and Technology. Foreign Secretary meets Indian Prime Minister Modi and launches landmark Technology Security Initiative. Available at: GOV.UK (www.gov.uk)