About LEXplain
LEXplain investigates the possibilities and limitations of using artificial intelligence to make legal decisions in public administration.
The latest artificial intelligence technologies are based on large language models built through machine learning. Their mode of operation is therefore not fully comprehensible to either ordinary users or experts. This creates a problem in relation to the duty of justification that public authorities must comply with when they make decisions about citizens' rights and duties.
The justification requirement is meant to ensure that citizens can understand the decisions made about them. At the same time, it is a means of securing the quality and legitimacy of public decision-making. The requirement entails that administrative authorities must make an effort to inform themselves about and understand the citizen's situation, so that they can apply the legal rules correctly and give the citizen certainty that this has been done.
If the technology used in the case processing does not support the justification requirement, it cannot be used in a manner that is satisfactory under the rule of law. LEXplain will investigate how AI technology can be adapted to case processing and decision making in a way that ensures compliance with the legal obligation to provide justificatory reasons.
The project is a collaboration between lawyers and computer scientists from both Denmark and Norway and involves collaboration with tax and social authorities in both countries.
Project summary
The research in LEXplain is aimed at understanding how new hybrid AI technologies can be used to support legal decision-making by adapting them to the existing practice of providing justificatory explainability, which is at the core of the rule of law. The project will tackle the problems associated with contemporary AI technology, especially its inscrutable algorithms. Such black-boxed technology, when used in the context of legal decision-making, challenges several rule-of-law ideals such as transparency in reasoning, accountability and the relevance of the explanation to the case at hand. In short, the use of AI for legal decision-making challenges law's legitimacy. To better understand this problem and how it may be overcome, LEXplain investigates the legal explainability requirement along historical, cross-jurisdictional and empirical dimensions and probes how hybrid AI, which combines machine learning with symbolic AI, might resolve the rule-of-law concerns associated with black-boxed AI.
There is a strong need to better understand the relationship between XAI and legal justificatory explanation, and how a hybrid AI architecture might be designed to support legal reason-giving in individual decision-making. Investigating "human-in-the-loop" approaches to legal decision-making, LEXplain will examine how public institutions can gain many of the advantages offered by AI while still retaining human control over the decision-making process, thereby upholding explainability and rule-of-law values.
The overarching aim of LEXplain is to create a new knowledge space where AI explainability meets legal explainability in order to push the "XAI for law" research frontier. To do so, LEXplain organizes its research around the following research question: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly becoming available?
Research objectives
The primary objective of LEXplain is to establish new interdisciplinary knowledge on explainable AI (XAI) in the context of law by researching the explainability culture embedded in legal practice, as a basis for understanding how AI can support decision-making under the rule of law.
The secondary objective is to investigate how new forms of hybrid-AI systems can be used to support legal decision making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.
Research question and design
LEXplain will focus on AI recommendations in the context of individual legal decision-making in public administration under the rule of law. We find that this focus, rather than full AI automation, presents the most enriching field of research in terms of both societal and scientific impact. With a "human-in-the-loop" approach to legal decision-making, public institutions can gain many of the advantages offered by AI while still retaining human control over the decision-making process. LEXplain will pursue this approach by investigating how a new form of hybrid-AI system can be developed to support legal decision-making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.
The overarching aim of LEXplain, then, is to create a new knowledge space where AI explainability meets legal explainability in order to push the "XAI for law" research frontier. To do so, LEXplain organizes its research around the following research question:
RQ: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly becoming available?
To research the interaction between AI systems for legal decision-making support and the justificatory explainability requirements pertaining to legal decision-making, LEXplain conducts an in-depth exploration of the relationship between legal and computational explainability. It does so through a three-dimensional inquiry into the RQ, pursued through three overlapping Research Streams:
1: Evolution and differentiation
2: AI explainability support
3: Implementation and transformation
Research stream 1: Evolution and differentiation
RS 1 investigates how legal explainability requirements and explainability culture have evolved through the second half of the 20th century and into the 21st, primarily through institutional interplay. This RS will take a mixed-method approach, combining theoretical doctrinal-analytic and empirical legal research methods, to develop a deeper understanding of how legal justificatory explainability can be understood. For this part of the project, we will apply several theories of legal interpretation (such as statutory and purposive interpretation) to a combination of sources of law on explainability requirements (legislative texts; judicial precedent; Sivilombudet's practice and recommendations) and legal doctrinal literature (textbooks on administrative law; legal doctrinal research articles on the explainability requirement). We will extract criteria for legal explainability quality, encompassing what has to be explained, how explanation can be performed (i.e. what constitutes suitable explanation elements/factors) and why these criteria are essential for reason-giving in a rule-of-law context. These criteria will then form the basis for examining administrative decisions. Legal hermeneutics, targeting the interplay between legal practice, judicial review of such practice and legal science, will be a central research method for extracting these criteria. We have selected the fields of tax law and welfare law as the empirical focus of our project. Because of the large volume of legal decisions in these fields, they have both an urgent need and a strong potential for AI to support legal decision-making in accordance with rule-of-law requirements.
Research stream 2: AI explainability support
RS 2 investigates to what extent AI can support legal explainability by examining and reviewing hybrid-AI architectures for legal explainability. Before undertaking research into these architectures, we will study the existing literature on AI-based recommendation and question-answering systems for individual legal decision-making, with a particular focus on hybrid AI solutions. This will serve as a baseline for the further work in RS2, which will focus on identifying the kinds of improvements needed to support legal justificatory explanations, defined according to the criteria identified in RS1. To understand how AI can interact with justificatory explainability in the context of law, we will prototype a three-pronged hybrid AI architecture that combines 1) NLP techniques and Large Language Models such as GPT-4 and open-source models, in particular the recently developed Norwegian language models NorGPT and NorBERT, 2) expert systems based on legal knowledge graphs extracted from legal databases (we will use Lovdata and Retsinformation, respectively, as sources for identifying the authoritative formulations of legal text), and 3) the databases of previous cases described under RS1.
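The concrete architecture will be developed during the project; purely as an illustration, the sketch below shows one way the three components described above could be wired together. Everything in it is a hypothetical placeholder (the function names query_knowledge_graph and retrieve_similar_cases, the prompt structure, the dummy language model), not a description of the project's actual design.

```python
from dataclasses import dataclass
from typing import Callable, List

# --- Placeholder structures for the three knowledge sources in the sketch ---

@dataclass
class LegalRule:
    source: str   # e.g. a statute section identified via Lovdata / Retsinformation
    text: str     # authoritative formulation of the rule

@dataclass
class PriorCase:
    case_id: str
    summary: str
    outcome: str

def query_knowledge_graph(case_facts: str) -> List[LegalRule]:
    """Hypothetical lookup of applicable rules in a legal knowledge graph."""
    # A real system would traverse a graph extracted from legislation.
    return [LegalRule(source="Statute §X (placeholder)", text="Placeholder rule text.")]

def retrieve_similar_cases(case_facts: str, k: int = 3) -> List[PriorCase]:
    """Hypothetical retrieval of comparable prior decisions (the RS1 case database)."""
    return [PriorCase(case_id="CASE-001", summary="Placeholder summary.", outcome="Granted")]

def draft_justification(case_facts: str, llm: Callable[[str], str]) -> str:
    """Combine rules, prior cases and a language model into a *draft* justification
    that a human caseworker reviews before any decision is made."""
    rules = query_knowledge_graph(case_facts)
    cases = retrieve_similar_cases(case_facts)
    prompt = (
        "Draft a justification for a caseworker to review.\n"
        "Cite only the rules and cases listed below.\n\n"
        f"Facts of the case:\n{case_facts}\n\n"
        "Applicable rules:\n"
        + "\n".join(f"- {r.source}: {r.text}" for r in rules)
        + "\n\nComparable prior cases:\n"
        + "\n".join(f"- {c.case_id} ({c.outcome}): {c.summary}" for c in cases)
    )
    return llm(prompt)

if __name__ == "__main__":
    # A dummy "LLM" stands in for GPT-4 / NorGPT in this sketch.
    echo_llm = lambda prompt: "DRAFT (to be reviewed by a caseworker)\n" + prompt
    print(draft_justification("The applicant claims a deduction for ...", echo_llm))
```

The point of the sketch is the division of labour: the symbolic components supply the authoritative rule texts and the comparable cases that a justification must cite, while the language model only drafts text for the human in the loop to review.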
Research stream 3: Implementation and transformation
This RS investigates how hybrid AI systems can be developed and implemented in ways that are compatible with the upcoming AI Act. It will also attempt to understand how hybrid AI systems may challenge, and possibly transform, how legal decision-making work is performed in public administrative practice. To do so, RS3 will initially use a semi-structured interview approach (Brinkmann 2015) to gain information about how legal caseworkers in the tax and welfare administrations in Denmark and Norway perceive the explainability requirement and how they implement it in their day-to-day work. These interviews will provide a baseline for understanding the organizational context and work culture into which AI systems will be implemented.
