30 June 2023

Countries Must Act Now Over Ethics of Artificial Intelligence

Sir Robert Buckland, Consultant in our Dispute Resolution Team, delves into the issue of AI in the justice system and its potential consequences.

The rise of artificial or machine intelligence, for so long a debate about technology and capability, is now becoming the stuff of mainstream politics, and not before time. As the race to increase capacity hots up, legitimate concerns as to the safety of this technology, and the relative lack of investment in this area, are being raised in increasingly alarmist tones in the media. My view is that Artificial Intelligence is at once a gift and a burden. A gift, because it will transform many processes that are currently too slow, expensive and laborious. A burden, because it sets us new challenges that require us to think fundamentally about human activity and its qualities.

Nowhere is this more apparent than in the field of justice. Digitalisation and AI offer speed, efficiency and cost-effectiveness at a time when, post Covid, access to justice is proving difficult. Millions of cases can be resolved with the use of AI technology. It is already happening in China and Brazil, for example. But important questions need to be asked: can the machine ever truly replicate the often very human thought processes that go into decisions on issues such as the credibility of a witness, the granting of bail, or the care of the children of an estranged couple?

In countries with a strong rule of law and democratic tradition, the integrity of the datasets used to populate justice algorithms will need to be strong and transparent.  When it comes to China’s use of AI in cases, that transparency, to say the least, is missing.  Through our membership of the G7, our globally competitive tech sector and our reputation as a hub for financial and legal services, the UK is well-placed to play a leading role in instigating the development of international principles in the use of AI in the administration of justice.  In doing so, we should be looking not only at the preparation and delivery of judgments but the tendering of legal advice too.

When I trained as a part-time Crown Court Judge, the then Lord Chief Justice of England and Wales, Lord Judge, reminded all of us at the end of the course at the Judicial College that, in our work and our judgments, we should not lose sight of our humanity; in other words, our experiences as human beings, as opposed to our training as lawyers. This was a reminder that, although the law is there to be applied, judicial discretion is shaped not just by our legal training and experience, but also by our experiences as humans.

I am in the process of carrying out research on these issues in my role as a Senior Fellow at Harvard, and as leaders from Rishi Sunak to Joe Biden openly discuss AI governance, and with London set to host an international AI Summit in the autumn, I propose that like-minded Governments and their respective legal sectors should work to agree an international, rules-based system for AI, founded on the following principles, wherever the state uses AI in the administration of justice:

  1. AI can be used for legal research, advice, and the preparation of submissions and judgments, but to ensure full transparency there must be disclosure of the nature of its use and of the underlying foundation model used to create the database;
  2. AI should not be used to ultimately determine issues requiring reflective judgement and where the public interest demands human involvement, for example determining criminal liability, custodial sentences and family issues including the care of children;
  3. If AI is to be used to determine cases, any consent obtained from the parties needs to be informed as per principle one;
  4. Where AI is used to determine cases, any fact-based outputs must be verified;
  5. If AI is used to determine a case outcome, a right of appeal to a human decision-maker must be available.

The legal sector in each country, including the judiciary and the legal professions, should work together to develop agreements and protocols on the use of AI with a clear outcome in mind. That outcome should be to see more cases and problems resolved than ever before, for more and more people, while maintaining and enhancing the essential ethics of justice itself. The UK Government's recent AI White Paper sets out general regulatory principles that are very much in line with the proposals I have outlined, which now need to be applied in different sectors.

The time to act on the ethics of AI in justice is now. Whilst I am excited and enthused about the potential of AI for legal services and justice, I want it to operate within clear and well understood ethical boundaries that serve the interests of wider society whilst protecting the essential human qualities of justice.

Rt Hon Sir Robert Buckland KBE KC MP is Senior Counsel at Payne Hicks Beach LLP, a Senior Fellow at the Mossavar-Rahmani Center for Business & Government at Harvard Kennedy School, and a former Lord Chancellor.
