
AI Alignment

The event brings together experts, researchers, and industry professionals to explore, discuss, and advance our understanding of the challenges and solutions related to aligning artificial intelligence systems with human values and goals.

Illustration from https://betterimagesofai.org/
Photo:
Daniela Zampieri / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/


A recording of the seminar will be available via this link (coming).

The objectives and expected outcomes of this seminar are as follows:
- Participants will gain a deeper understanding of the challenges associated with AI alignment, including biases, interpretability, and value misalignment.
- The seminar will facilitate interactive discussions to develop concrete solutions and strategies for addressing AI alignment challenges.
- Participants will have the opportunity to network with leading experts and build connections that can lead to ongoing collaborations and partnerships.

This event is a collaboration between Universitetsfondet and UiB AI.

Programme:

09:00-09:30: Registration and coffee

09:30-09:40: Introduction by moderator Samia Touileb, Department of Information Science and Media Studies at the University of Bergen

09:40-10:10: Tom Potter, BBC: "Building Responsible AI Tools for Public Service Journalism"

10:10-10:40: Leonora Onarheim Bergsjø, HiØ and UiA: "Ethical Risk Assessment of AI"

10:40-11:00: Coffee break

11:00-11:30: Rune Nyrup, Aarhus University: "Rigorous Evaluation Methods for AI Explainability: How can philosophy and computer science collaborate?"

11:30-12:00: Holli Sargeant, St. John's College, Cambridge: "Encoding Equality: The Incompatibility of Algorithmic Logic and Law"

12:00-12:45: Lunch (included)

12:45-13:15: Jan Broersen, Utrecht University: "Do LLMs mean what they say?"

13:15-13:45: Davide Liga, University of Luxembourg: "Mechanistic Interpretability and Moral Stance in Large Models"

13:45-14:00: Coffee break

14:00-14:30: David Samuel, University of Oslo: "Aligning Norwegian LLMs to user preferences and expectations"

14:30-15:00: Ryan Marinelli, University of Oslo: Talk on AI safety or security (title to be announced)

15:00-15:45: Wrap up and discussions