
A European strategy for AI in science

Work is under way in the EU on a strategy for AI in science. UiB has submitted five inputs to the strategy process: 1) Trust and trustworthiness; 2) Cross-disciplinary and cross-sectoral collaboration; 3) Strengthening AI competence in all disciplines; 4) Responsible data sharing and strong infrastructure; and 5) Long-term funding for basic AI research.

[Image: Front page of the position paper. Photo/ill.: UiB AI]


Work is under way in the EU on a strategy for AI in science. The work has two main strands:

  • accelerating the adoption of AI by scientists, by creating essential enablers such as improved access to data, computational power and talent.
  • monitoring and steering the impact of AI on the scientific process, addressing science-specific AI challenges such as preserving scientific integrity and methodological rigour.

You can read more about the EU's policy work on AI in science on this webpage.

This year's research ethics day – 4 October – is dedicated to cross-disciplinary discussions on AI and research ethics. Check out the excellent programme here!

Read UiB's position paper below, or open/download the document here.

ON A STRATEGY FOR AI IN SCIENCE

UiB strongly supports the initiative to develop a European strategy for AI in science. As our societies become increasingly reliant on AI-based digital solutions, Europe must advance research in AI and its theoretical underpinnings. An intensified uptake and critical evaluation of AI in science is crucial for enabling research to tackle societal challenges, such as climate change, ageing populations, diseases, and threats to democracy – as well as societal disruptions caused by AI systems themselves. The strategy should also support critical studies of how AI is applied in science, to safeguard trust in research and ensure alignment with democratic values.

FIVE KEY INPUTS TO THE STRATEGY

To build a resilient and future-oriented research ecosystem, Europe must invest in the scientific foundations of AI and ensure broad academic engagement. This includes connecting computer science (informatics) with natural sciences, medicine, psychology, social sciences, humanities, law, and artistic research. Based on our experience and academic breadth, we highlight the following five key inputs for the upcoming European strategy for AI in science:

  1. Trust and trustworthiness
  2. Cross-disciplinary and cross-sectoral collaboration
  3. Strengthening AI competence in all disciplines
  4. Responsible data sharing and strong infrastructure
  5. Providing long-term funding for basic AI research

1. Trust and trustworthiness

To ensure and enhance trust in the use of AI in research, we must embed transparency, accountability, and inclusivity into the development and deployment of trustworthy AI systems. European research policy should prioritize open-source AI models and the reproducibility of AI-driven scientific findings. We need research from the social sciences, humanities, law, and the arts to understand the impacts of AI on society, culture, and democracy, and to develop frameworks for deciding in which situations and in which ways AI can be beneficial – and when it should not be used. AI is increasingly used to analyse unstructured data, such as text and images, that previously required qualitative methodologies. This calls for research on the epistemological consequences of applying quantitative methodologies to unstructured data. To build trust in the use of commercial AI systems in research, we also need more research on the governance and political economy of the AI systems being used – including lock-ins, automatic updates, sovereignty, compliance, and data ownership. From a Nordic perspective, maintaining public trust in science and science-based policymaking requires clear ethical guidelines and mechanisms for independent oversight. By fostering a culture of responsible use of AI in science, Europe can lead globally in ensuring that AI accelerates scientific discovery in line with democratic values and societal benefit. The European Commission and the ERA Forum's 'Living guidelines on the use of Generative AI in Research' are an excellent step in this direction.

2. Cross-disciplinary and cross-sectoral collaboration

A trustworthy and socially responsible uptake of AI in science requires strong collaboration across disciplines and sectors. The complexity and societal impact of AI demand that its development and application are not confined to technical or disciplinary silos. Collaboration – and funding schemes – must bring together researchers from all disciplines, and include stakeholders from the public sector, industry, civil society, and policymaking. Such cooperation will help the development and application of AI in science align with human rights, democratic values, regulation, and societal needs. In line with the Nordic tradition of consensus-building and inclusive governance, this collaborative model can serve as a blueprint for Europe. It will help ensure that AI in science evolves in a way that is innovative, democratically legitimate, and consistent with established research ethics. This in turn will reinforce the societal legitimacy of science and of the institutions that produce it. Not least, cross-disciplinary and cross-sectoral collaboration is crucial for new scientific breakthroughs such as AlphaFold.

3. Strengthening AI competence in all disciplines

To accelerate scientific discovery in the AI era, Europe must educate future AI specialists while also building AI literacy across all research fields and roles. Disciplines such as the humanities, social sciences, and law provide crucial insights into how AI systems shape language, perception, and decision-making – and into what AI cannot accomplish – and should be recognized as key contributors to AI development and governance.

As AI reshapes research methods, researchers need to understand how to use AI tools effectively, ethically, and critically. This includes training in data handling and algorithmic reasoning, and awareness of governance, ethical, and societal implications. Researchers must also be equipped to assess not only how to use AI, but whether it is appropriate to use it at all. Fostering such reflective competence is key to ensuring that AI is applied responsibly and contextually in science. To secure future research capacity in AI, it is important that AI is well integrated into all education programmes.

Higher education institutions in Europe play a central role in building data and AI literacy, which should be embedded in curricula at all levels and supported through lifelong learning. Through research-based education, including continuing education, universities can equip citizens with AI literacy and an understanding of responsible use of AI in science.

4. Responsible data sharing and strong infrastructure

Norway has long-standing, high-quality databases, some of which span several decades. For example, the Norwegian population-based health registries are recognized for their quality, continuity, and integration, offering valuable contributions to AI-driven research in medicine and pharmaceuticals. Similar strengths can be found across Europe, where national and regional datasets represent important assets for scientific advancement. To fully unlock the potential of both analytical and generative AI in science, a strong infrastructure and well-functioning data-sharing frameworks are essential. Europe should therefore continue, and expand, its efforts to build secure, FAIR-aligned, and inclusive research data ecosystems. Particular attention must be given to the use of personal data when AI is used in research. As AI enables the processing and aggregation of data that was previously too extensive and unstructured to connect, research ethics guidelines need continuous updating, and researchers' data and AI literacy must be assured.

The development of the EU’s next research and innovation framework programme (FP10) provides a key opportunity to strengthen these capacities. FP10 should include investments in research infrastructures and dedicated initiatives such as AI ‘gigafactories’, capable of supporting foundational and applied AI development across disciplines. These efforts should be aligned with the EU’s broader objectives of scientific excellence, open science, technological sovereignty, and the green and digital transitions, in addition to established research ethics.

5. Providing long-term funding for basic AI research

To be at the forefront of AI-enabled science, Europe must provide stable and long-term funding for foundational AI research. This includes theoretical computer science, algorithmic design, AI ethics, cybersecurity and trustworthy AI, as well as foundational research on the impacts, ethics and cultural aspects of AI from the humanities, social sciences, psychology and law. Such research underpins the development of reliable, energy-efficient, and ethically sound AI systems for science.

The next research and innovation framework programme (FP10) should include dedicated instruments for low-TRL, curiosity-driven research across disciplines, while also fostering links to applied AI development. Recognising the strategic importance of foundational research is key to ensuring European technological sovereignty and scientific excellence.