In courtrooms and legal offices across jurisdictions, the use and influence of artificial intelligence are becoming increasingly apparent. According to the latest Future of Professionals report by Thomson Reuters, 77% of professionals expect AI to significantly change or even transform their work over the next five years – and nearly three-quarters of legal professionals (72%) believe AI will have a positive impact on the field.
These figures reflect growing optimism, but they also sit alongside more radical predictions about the role AI might come to play in the administration of justice.
These predictions range from modest to revolutionary. Some legal professionals envision incremental changes to existing processes, while others anticipate a sweeping transformation of judicial procedures and the fundamental roles of judges and lawyers.
Some of these futuristic expectations involve a justice system in which judges and lawyers are replaced by robots capable of delivering verdicts in record time or providing legal advice and defense. It is also envisaged that, sometime in the future, AI will be used to detect whether a witness is lying, to predict with precision whether an offender will reoffend, or to assess how dangerous an offender remains after serving their sentence.
In other words, there is a widespread belief that this new technology will eventually have the power to profoundly transform one of the most traditional and change-resistant sectors.
Understanding how this change is unfolding requires examining the current situation in several countries. The implementation of AI in legal systems varies across jurisdictions – and, beyond specific national approaches, its use presents both advantages and common challenges for justice administrations and for the legal professionals who practice before courts and tribunals.
On the one hand, AI offers benefits such as task automation, data analysis for more informed decisions, and outcome prediction, thereby improving the efficiency and quality of legal professionals' work. For example, Kira is an AI-powered contract analysis platform that extracts essential clauses and data from thousands of documents in a fraction of the time required by human lawyers. Similarly, Luminance, developed by Cambridge University researchers and trained on legal documents, identifies legal concepts and anomalous terms within contracts that may present risks. On the other hand, concerns arise about biases in data and algorithms, as well as about the protection of privacy and data security.
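To give an intuition for the document-review side of this work, here is a minimal sketch of keyword-based clause flagging in Python. It is emphatically not how Kira or Luminance work – commercial platforms rely on trained machine-learning models – but it illustrates the underlying task of locating clause types across a contract; all patterns and names below are purely illustrative.

```python
# A minimal sketch of keyword-based clause flagging, to illustrate the kind
# of task contract-review platforms automate. Real products such as Kira or
# Luminance use trained ML models, not simple patterns like these.
import re

# Hypothetical patterns for clause types a reviewer might care about.
CLAUSE_PATTERNS = {
    "termination": re.compile(r"\bterminat(e|ion)\b", re.IGNORECASE),
    "indemnity": re.compile(r"\bindemnif(y|ication)\b", re.IGNORECASE),
    "governing_law": re.compile(r"\bgoverned by the laws of\b", re.IGNORECASE),
}

def flag_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return the sentences of a contract that match each clause pattern."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    return {
        label: [s for s in sentences if pattern.search(s)]
        for label, pattern in CLAUSE_PATTERNS.items()
    }

sample = (
    "This Agreement shall be governed by the laws of Spain. "
    "Either party may terminate this Agreement upon 30 days' notice."
)
print(flag_clauses(sample))
```

Even this toy version hints at the appeal: a routine that scans one contract in milliseconds scales to thousands of documents, which is precisely where human review becomes the bottleneck.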
Regarding biases, prejudice and discrimination are inherent risks in any social or economic activity (as outlined in the EU's 2020 White Paper on AI), not least because human decision-making itself is not immune to error or subjectivity. It is therefore essential to institute ex-ante control of any AI system employed, particularly in judicial decisions.
Additionally, the AI system must be subject to a regime of full traceability and transparency in its operation (this is precisely what the Spanish legislator has foreseen). Only in this way can it be ensured that the parties involved, and the legal professionals representing them, fully understand how these technologies work and can consequently present their claims effectively and appeal a decision if necessary. This is all the more important given that some authoritative voices have warned that algorithms can, at times, operate as black boxes.
The case of Eric Loomis, decided by the Wisconsin Supreme Court in 2016, is a good example of the risks of using AI in the judicial field. In that case, the COMPAS software was used to predict the defendant's future behavior and risk of reoffending. The algorithm employs a proprietary machine learning model trained on historical criminal data to generate risk scores from inputs such as criminal history, social behaviors, and demographic information. Partly on the basis of its score, Loomis was sentenced to six years in prison. He appealed, claiming he had been deprived of a fair trial because the sentence relied on data produced by a program that was not specific to his individual case. Nevertheless, the Wisconsin Supreme Court upheld the use of the algorithm, while emphasizing that it should not be the sole factor in determining a sentence. The US Supreme Court subsequently declined to review the case, leaving the ruling in place.
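Since COMPAS is proprietary, its inner workings cannot be shown, but a toy sketch can illustrate the general mechanism at issue: a model fitted to historical outcomes turns a defendant's features into a probability-style risk score. The features, data, and model choice below are entirely hypothetical; the point is that if the historical labels reflect uneven policing, the resulting scores inherit that bias.

```python
# A toy risk-score sketch. COMPAS is proprietary, so this is NOT its method:
# it only illustrates how a model trained on historical outcomes turns case
# features into a score, and why biased training data yields biased scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [prior_offenses, age, unemployed (0/1)].
# Labels: 1 = reoffended. If past policing was uneven, these labels already
# encode that bias, and the fitted model will reproduce it.
X = np.array([[0, 35, 0], [3, 22, 1], [1, 45, 0], [5, 19, 1],
              [0, 50, 0], [2, 28, 1], [4, 24, 1], [1, 38, 0]])
y = np.array([0, 1, 0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# A new defendant receives a "risk of reoffending" probability.
defendant = np.array([[2, 26, 1]])
print(f"Predicted risk score: {model.predict_proba(defendant)[0, 1]:.2f}")
```

Nothing in this pipeline asks whether the training labels were fairly produced, which is exactly why courts and regulators insist that such scores never stand alone.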
Hopefully, a similar debate will not arise within the EU judicial system, as the Artificial Intelligence Regulation provides safeguards against such practices. In particular, it prohibits the placing on the market, putting into service, or use of AI systems that evaluate or classify individuals based on their social behavior or known, inferred, or predicted personal characteristics, where this results in detrimental or unfavorable treatment.
Another major risk of implementing AI systems in the judicial field concerns the use of personal data to train and improve algorithms. These systems typically require massive datasets to identify patterns and make predictions, with models analyzing thousands, if not millions, of examples to establish correlations. It is therefore not unusual for this information to include sensitive details about individuals, such as medical, political, or religious histories.
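One common mitigation, sketched below under purely hypothetical field names, is data minimization: stripping sensitive attributes from records before they ever reach a training pipeline. This is an illustrative measure, not a description of any particular court system's practice.

```python
# A minimal sketch of stripping sensitive fields from records before they
# are used to train a model: one data-minimization step a justice system
# might require. Field names here are hypothetical, not from a real system.
SENSITIVE_FIELDS = {"medical_history", "religion", "political_affiliation"}

def minimize(record: dict) -> dict:
    """Drop sensitive attributes so they never enter the training set."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

case_record = {
    "case_id": "2023-0142",
    "claim_amount": 12000,
    "religion": "...",          # sensitive: must not reach the model
    "medical_history": "...",   # sensitive: must not reach the model
}
training_row = minimize(case_record)
print(training_row)  # {'case_id': '2023-0142', 'claim_amount': 12000}
```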
The Court of Justice of the European Union recently addressed the issue of automated decisions in relation to data protection law (ruling of December 7, 2023, case C-634/21). The case concerned SCHUFA, a German credit reference agency whose automated scoring banks draw on when granting or denying loans. The Court concluded that, as a matter of data protection, a loan cannot be automatically denied on the basis of such a score without human intervention, unless the client has explicitly consented to automated decision-making.
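In engineering terms, the ruling points toward a human-in-the-loop gate. The sketch below shows one possible shape of such a gate; the threshold, field names, and consent flag are assumptions for illustration, not details from the judgment.

```python
# A sketch of a human-review gate of the kind the CJEU ruling points toward:
# an automated score may approve, but a denial is routed to a human reviewer
# unless the client has explicitly consented to fully automated decisions.
# Threshold and field names are illustrative assumptions, not from the case.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.7  # hypothetical score cut-off

@dataclass
class LoanApplication:
    score: float                  # output of an automated credit model
    consented_to_automation: bool

def decide(app: LoanApplication) -> str:
    if app.score >= APPROVAL_THRESHOLD:
        return "approved"
    if app.consented_to_automation:
        return "denied (automated, with explicit consent)"
    return "escalated to human reviewer"  # no automated denial without consent

print(decide(LoanApplication(score=0.55, consented_to_automation=False)))
```

The design choice is the asymmetry: favorable outcomes may flow automatically, while adverse ones trigger human oversight, which is how transparency and contestability are preserved in practice.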
This decision sets an important precedent for how data protection law applies to automated decisions, highlighting the need for transparency, human oversight, and informed consent. It also underscores the importance of ensuring that AI systems respect data protection principles and fundamental rights.
With these advantages and challenges in mind, it is worth looking at how different countries have chosen to implement AI in their legal systems. In 2017, China opened its first Internet Court in Hangzhou to resolve disputes linked to online shopping, digital payments, and intellectual property. These government-run courts, now also operating in Beijing and Guangzhou, are separate from company-led dispute systems but often integrate with them through cloud platforms and data-sharing tools.
Estonia is another country well-regarded in this field. In 2019, there were reports that an AI system was being developed to handle small civil claims, although the Estonian Ministry of Justice later clarified that no such "robot judge" project was actually being implemented. Still, the country's commitment to using AI and technology to streamline court processes remains.
In the US, the integration of AI into the legal system has advanced rather quickly. For example, post-pandemic, Arizona implemented the Digital Evidence Center, which allows for the display and exchange of cloud-based evidence, as well as more efficient virtual, in-person, and hybrid hearings.
However, the rapid adoption of AI tools like ChatGPT in the legal field has also raised concerns. Cases have emerged of lawyers relying on the tool and basing their arguments on entirely fabricated legal precedents. In 2023, New York lawyers were sanctioned for submitting a brief with fictitious citations generated by ChatGPT and, similarly, in 2025 a federal judge in Wyoming fined lawyers for citing non-existent cases hallucinated by AI in a lawsuit against Walmart. In both instances, the lawyers admitted to their "inadvertent" mistakes, which only further highlights the need for legal professionals to tread carefully when using AI tools and AI-generated content.
In 2024, the Silicon Valley Arbitration and Mediation Center published guidelines on the use of AI in arbitration. The guidelines establish a roadmap built on clear principles such as transparency, accountability, and the non-delegation of decision-making to AI tools (a provision that, while logical and evident, had yet to be formally recognized).
In the UK, rather than developing and implementing a specific system for the courts (as in Estonia, China, and the US), the British Judicial Office opted to issue guidelines in December 2023, offering recommendations to help judges use public chatbots effectively while warning them about the associated risks. These guidelines were updated this year to reflect new concerns about litigants relying on AI-generated content that may be inaccurate or simply misleading. The update also mentions Microsoft's Copilot Chat as an appropriate tool for administrative tasks, provided that confidentiality and judicial independence are protected.
Finally, in Spain, a framework has been established that allows the justice system to carry out automated actions with the help of AI, one such action being the generation of draft judicial decisions. This initiative is part of Royal Decree-Law 6/2023, of December 19, aimed at modernizing court operations and improving efficiency. There has been concern among Spanish judges, however, who have called for safeguards to ensure that the use of AI in judicial work respects fundamental rights and judicial independence.
As we can see, while AI is used in some justice systems to help resolve legal disputes, there has not yet been a drastic change globally. In most instances, it is applied only to small-scale cases or to specific tasks within a lawsuit, without altering the judicial system as a whole. While AI is expected to improve the efficiency of judicial systems, the current state of the art seems unlikely to bring about a revolutionary change in the roles of judges and lawyers in the very near future. Essentially, this is an evolution rather than a revolution in legal practice. In other words, judges and lawyers will not, for the moment, be replaced by robots capable of delivering verdicts in record time or providing legal advice and defense. Rather, the main changes being implemented across jurisdictions aim to make data management more efficient. We can expect AI to generate high-quality data that helps predict the outcome of a case, and to be used to organize and make sense of the jurisprudence in legal databases, facilitating work across the legal field.
For legal practitioners and institutions, it is essential to maintain a realistic perspective, balancing technological advancement with established legal practice. Human involvement will remain crucial, not only for complex reasoning and ethical judgment but also for the interpersonal dimension of legal practice, which AI simply cannot replicate. We should certainly integrate these new technologies into our work, recognizing that they are key to making our work more efficient and to providing high-quality, cutting-edge service. Whether we will someday see a radical change in justice administrations, judicial procedures, and the role of lawyers remains an open question. What is clear, though, is that the legal profession is entering a phase of transformation, one that requires openness to change and innovation as well as vigilance in preserving the fundamental values of justice.
© IE Insights.