Participating in the AI Centre TRUST
On Wednesday 11 June, the Prime Minister announced that TRUST – the Norwegian Centre for Trustworthy AI – will be one of six new AI centres in Norway. TRUST is a large national consortium of research institutions and scholars that, through transdisciplinary research, aims to ensure that Norway contributes at the forefront of the development and use of responsible, inclusive, and robust AI systems.
The research group for tort and insurance law at the Faculty of Law, University of Bergen, is delighted to be part of the largest legal research community within AI research in Norway.
The researchers are engaged in intensive collaboration with other scientific disciplines, including the social sciences, natural sciences, and philosophy. In Bergen, we will cooperate particularly closely with the Canadian AI institutes (the Vector Institute, the Schwartz Reisman Institute, the Acceleration Consortium, IVADO, and AMII), as well as with the European Centre of Tort and Insurance Law, led by its director, Professor Ernst Karner.
The researchers will also participate in clusters with Norwegian user partners, where interdisciplinary solutions can be tested across the operational areas of the 44 partners.
Legal issues in tort and insurance law will be explored in most of TRUST’s 14 research areas, including:
- Attribution of Responsibility:
We will work across disciplines to analyse how technical methods can support legal requirements for explanation and accountability, including in tort law and under the EU AI Act (product safety regulation).
- Causation in AI Systems:
We will contribute to analyses of how to prove actual causation (not just correlation) between processes in AI systems and harm – a fundamental condition for liability. This includes new methods to meet legal standards of proof for causation, especially in cases of indirect discrimination or harm.
- Uncertainty Analysis:
We will contribute to interdisciplinary research on how uncertainty in AI predictions affects the allocation of legal responsibility.
- Quality Assurance and Compliance:
Our perspective feeds into research on governance mechanisms and frameworks for risk assessment in AI systems, ensuring compliance with ethical and legal standards. This includes processes that give developers and users clarity on how to avoid liability.
- Digital Twins and Hybrid Models:
We research how responsibility, data control, and enforcement can be managed in complex AI systems using digital twins. These are used, for example, to simulate harm scenarios during the development or monitoring of AI systems.
- Emergent Behaviour:
Anne Marie Frøseth co-leads a research area analysing unwanted or harmful effects of unintended interactions between AI systems. Such behaviour makes it difficult to identify the cause of harm and apply ordinary liability rules. We will analyse weaknesses in current law and propose new models of liability allocation that can be implemented across jurisdictions.