New CIArb Guidelines on the Use of AI in Arbitration

Briefing
22 July 2025

The Chartered Institute of Arbitrators (CIArb) recently published its guidance on the use of AI in arbitration (2025) (the Guidelines). The Guidelines provide a framework for the use of artificial intelligence (AI) in arbitration. They are intended to encourage the proper use of AI and to support practical efforts to mitigate the associated risks, which are well documented in litigation across many jurisdictions.

The CIArb Guidelines are designed to be used in conjunction with institutional rules.

The Guidelines provide a comprehensive overview of AI in arbitration, and are divided into four key sections:

  • benefits and risks
  • general recommendations
  • arbitrators’ powers on the use of AI by the parties
  • the use of AI by arbitrators

The Guidelines also include a template agreement on the use of AI by the parties, and two template procedural orders.

What are the benefits and risks associated with using AI?

The following includes some of the key benefits and risks associated with using AI in arbitration proceedings.

Benefits

  • Efficiency. AI-based tools assist in the arbitration process. AI-based legal research tools are more adaptable and user-friendly than traditional search technologies.
  • Data analysis. AI can work through data and identify patterns far more quickly and accurately than a human reviewer, making it easier to spot conflicts in evidence and arguments.
  • Text generation and summarisation. These tools are useful when creating chronologies.
  • Prediction. AI-based tools can analyse case data and generate predictions of legal outcomes.

Risks

  • Enforceability. There are risks to the enforceability of arbitral awards, with adverse implications for the credibility and legitimacy of arbitration. Enforcement may also be complicated in jurisdictions where AI tools are banned or restricted.
  • Confidentiality. There are substantial risks to confidentiality: an AI tool could cross-contaminate material from different cases, which might result in a breach of confidentiality.
  • Cybersecurity. Minimising the possibility of cyber threats is essential.
  • Bias. Potential for bias arises from the selection of specific databases and the configuration of particular algorithms.
  • Jurisdictional acceptance. The use of AI may give rise to challenges in jurisdictions where it is less accepted.
  • Hallucinations. There is a risk of platforms referencing legal arguments and cases that are not real.
  • Environmental considerations. AI platforms can consume very high levels of energy.

Recommendations on the use of AI in arbitration

  • parties and arbitrators need to investigate and consider the benefits and risks of using any prospective AI tool in arbitration
  • reasonable enquiries should also be made into AI-related laws, regulations, or court rules in the relevant jurisdictions
  • the duties and responsibilities that apply in arbitration will continue to apply irrespective of whether AI is used

Arbitral powers on the use of AI by the parties

The use of AI does not affect the tribunal's responsibility for the conduct of the proceedings.

Tribunals will be able to give directions regulating the use of AI to preserve the integrity of proceedings and help to ensure enforceability of awards. Where the tribunal requires assistance with understanding the AI tool being used, AI experts can be appointed to advise.

The Guidelines provide for decisions on the use of AI to be recorded in the procedural order. If the parties fail to comply, then the arbitrators can take measures to remedy breaches and take any failures into account when awarding costs.

The importance of party autonomy is also highlighted. The Guidelines state that parties can exercise their autonomy to agree whether, and if so how, they wish to use AI tools. Where the arbitration agreement does not address the use of AI, the arbitrators will invite the parties to agree. Where the parties disagree on the use of AI, the Guidelines provide that the tribunal should be requested to make a ruling based on the circumstances of the case. If the tribunal considers that the use of AI by the parties has jeopardised the integrity of the proceedings, it can make a ruling on its admissibility and use, taking the following factors into consideration:

  • the benefits and risks associated with the AI tool
  • the nature and specific features of the AI tool
  • any applicable laws, regulations, policies, or institutional rules related to the use of AI
  • the data used to train the model (the parties can be required to disclose this)

Under the Guidelines, the parties will be required to disclose the use of AI where it has an impact on the evidence or the outcome of the arbitration. If a party fails to disclose the use of an AI tool, the tribunal can make related enquiries and assess any impact that the failure may have had on the integrity of the proceedings.

The Tribunal’s use of AI

The Guidelines state that the tribunal can use AI tools to enhance the efficiency and quality of arbitration but that they must not delegate their decision-making authority to AI. Whilst AI can assist with processing information, the arbitrators should maintain independent judgment and avoid using AI in ways that could compromise the integrity or enforceability of their award. Arbitrators should always independently verify AI-generated information and ensure that it does not unduly influence their decision-making.

Importantly, arbitrators continue to remain fully responsible for all aspects of the award, regardless of any AI assistance.

Arbitrators will need to consult the parties prior to using AI tools and provide the parties with an opportunity to comment. If there is disagreement between the parties and the tribunal, then the AI tool should not be used.

AI and Sustainability

The use of AI comes with a significant environmental cost. Running these technologies requires a lot of electricity resulting in high carbon emissions. AI systems also use large amounts of water to cool the data centres that power them. This raises concerns about sustainability—especially as arbitral institutions (including the  ICC, LCIA, and SIAC), and initiatives like the Campaign for Greener Arbitrations encourage more eco-friendly practices, such as virtual hearings, paperless processes, and choosing greener venues.

Potential for Legal Challenges

If confidential arbitration materials are uploaded to third-party AI tools, this may constitute a breach of an arbitration agreement and/or breach of any applicable confidentiality obligations, leading to a claim for damages or a challenge to the enforceability of the award.

Challenges could also arise where parties have expressly agreed to limit or prohibit the use of AI, in which case the use of AI tools could give rise to challenges for breach of due process or procedural fairness. Improper use has the potential to undermine the legitimacy of the proceedings or any award.

Comment

The CIArb’s 2025 Guidelines on AI in arbitration are a necessary and pragmatic response to the growing use of AI in legal proceedings.

The Guidelines acknowledge the benefits of AI, highlight the real risks associated with the technology (from confidentiality breaches to enforceability concerns and algorithmic bias), and provide a useful tool for parties and arbitrators alike.

Danica Douglas, Knowledge Paralegal, Disputes, London, assisted with the preparation of this briefing.

International Arbitration Quarterly | Edition Q2/2025