• E-mail: Marija.Slavkovik@uib.no
  • Phone: +47 55 58 23 77
  • Visitor Address
    Fosswinckels gate 6
    Lauriz Meltzers hus
    5007 Bergen
  • Postal Address
    Postboks 7802
    5020 Bergen

Marija Slavkovik is a professor at the University of Bergen in Norway. Her area of research is Artificial Intelligence (AI), with expertise in collective reasoning. Slavkovik is active in the AI subdisciplines of multi-agent systems, machine ethics and computational social choice.

Slavkovik believes that the world can be improved by automating away boring, repetitive and dangerous human tasks, and that AI has a crucial role to play towards this goal. In AI, the big problem she hopes to solve is the efficient self-coordination of systems of artificial intelligent agents.

In machine ethics, Slavkovik works on engineering machine ethics problems: how can we build autonomous systems and artificial agents that behave ethically? Want to know what has been happening in machine ethics since it stopped being a science-fiction-only topic? There is a tutorial for that. Slavkovik co-organised a Dagstuhl Seminar in 2019 on this topic. She is also one of the guest editors of the Special Issue on Ethics for Autonomous Systems of the AI Journal.

Slavkovik is the vice-chair of the Norwegian Artificial Intelligence Society and a member of the informal advisory group on Ethical, Legal, Social Issues of CLAIRE. She is on the education committee of NORA, currently working on developing a national PhD course on AI ethics.

In computational social choice and multi-agent systems, Slavkovik is particularly active in judgment aggregation. If you are wondering what this is, there is a tutorial for that. Her new passion in this field is looking for ways to consider the social network interaction of agents and the impact it can have on collective reasoning and decision-making, particularly in aggregation. For more on what social network analysis has to do with AI, go here.
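Judgment aggregation studies how the yes/no judgments of several agents on logically connected issues can be combined into a collective view. Its best-known puzzle, the discursive dilemma, can be sketched in a few lines (a toy illustration, not code from Slavkovik's own work):

```python
# A minimal sketch of the discursive dilemma in judgment aggregation:
# three agents judge propositions p, q and their conjunction p&q.
# Each individual judgment set is logically consistent, yet
# propositionwise majority voting yields an inconsistent collective view.

def majority(judgments):
    """Propositionwise majority over a list of {issue: bool} judgment sets."""
    issues = judgments[0].keys()
    n = len(judgments)
    return {i: sum(j[i] for j in judgments) * 2 > n for i in issues}

agents = [
    {"p": True,  "q": True,  "p&q": True},   # each row is consistent on its own
    {"p": True,  "q": False, "p&q": False},
    {"p": False, "q": True,  "p&q": False},
]

collective = majority(agents)
# Majorities accept p and accept q, but reject p&q -- inconsistent.
print(collective)  # {'p': True, 'q': True, 'p&q': False}
```

Much of the field consists in designing aggregation rules that avoid such inconsistent outcomes while keeping other desirable properties.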

Slavkovik was the chair and host of the 16th European Conference on Multi-Agent Systems (EUMAS), held December 6-7, 2018 in Bergen. Here are the proceedings. She is also on the board of EURAMAS.


Marija is an active speaker on issues of AI and ethics. Below are links to some of her talks, articles and interviews.

Video & audio


Doctoral students (main supervisor)

Past students

  • Flavio Tisi (co-supervision with Sonja Smets). 
  • Einar Søreide Johansen 
  • Hanna Kubacka (co-supervision with Jan-Joachim Rückmann). Related publication: Predicting the winners of Borda, Kemeny and Dodgson elections with supervised machine learning  [pdf]


  • Spring 2022 INFO901 Introduction to AI Ethics (graduate course)
  • Autumn 2021 AIKI100 Introduction to AI
  • Spring 2021 INFO383 Research topics in AI ethics.
  • Autumn 2020 INFO282 Knowledge representation and reasoning.
  • Spring 2020 INFO381 Research Topics in AI. The topic of the course is AI Ethics.  Detailed program.
  • Autumn 2019 INFO283 Basic Algorithms in Artificial Intelligence. 
  • Spring 2019 INFO284 Machine Learning.
  • Spring 2017 INFO381 Research Topics in AI. The topic of the course is Machine Ethics. Detailed program.
  • Autumn 2016, 2017, 2018 INFO125 Data Management. 

Office hours are by appointment. 



For the freshest list of publications visit Marija's home page, and to see how other people use Marija's publications visit her Google Scholar profile page.


Academic article
  • Show author(s) (2022). Markov chain model representation of information diffusion in social networks. Journal of Logic and Computation.
  • Show author(s) (2022). Computational ethics. Trends in Cognitive Sciences. 388-405.
  • Show author(s) (2021). The social dilemma in artificial intelligence development and why we have to solve it. AI and Ethics. 11 pages.
  • Show author(s) (2020). The complexity landscape of outcome determination in judgment aggregation. The journal of artificial intelligence research. 687-731.
  • Show author(s) (2019). Improving Judgment Reliability in Social Networks via Jury Theorems. Lecture Notes in Computer Science (LNCS). 230-243.
  • Show author(s) (2019). Autonomous yet moral machines. CEUR Workshop Proceedings.
  • Show author(s) (2019). Aggregating Probabilistic Judgments. Electronic Proceedings in Theoretical Computer Science (EPTCS). 273-292.
  • Show author(s) (2018). Classifying the autonomy and morality of artificial agents. CEUR Workshop Proceedings. 67-83.
  • Show author(s) (2018). Aggregation of probabilistic logically related judgments. NIKT: Norsk IKT-konferanse for forskning og utdanning.
  • Show author(s) (2017). `How did they know?' Model-checking for analysis of information leakage in social networks. Lecture Notes in Computer Science (LNCS). 42-59.
  • Show author(s) (2017). The Norwegian Oil Fund Investment Decider N.O.F.I.D. NOKOBIT - Norsk konferanse for organisasjoners bruk av informasjonsteknologi.
  • Show author(s) (2017). Implementing Asimov’s First Law of Robotics. NIKT: Norsk IKT-konferanse for forskning og utdanning.
  • Show author(s) (2017). A partial taxonomy of judgment aggregation rules and their properties. Social Choice and Welfare. 327-356.
  • Show author(s) (2017). A modified Vickrey auction with regret minimization for uniform alliance decisions. Studies in Computational Intelligence. 61-72.
  • Show author(s) (2016). Iterative judgment aggregation. Frontiers in Artificial Intelligence and Applications. 1528-1536.
  • Show author(s) (2016). Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems. 1-14.
  • Show author(s) (2016). Agenda Separability in Judgment Aggregation. Proceedings of the AAAI Conference on Artificial Intelligence. 1016-1022.
  • Show author(s) (2015). An abstract formal basis for digital crowds. Distributed and parallel databases. 3-31.
  • Show author(s) (2014). Not all judgment aggregation should be neutral. CEUR Workshop Proceedings. 198-211.
  • Show author(s) (2014). Measuring Dissimilarity between Judgment Sets. Lecture Notes in Computer Science (LNCS). 609-617.
  • Show author(s) (2014). How Hard is it to compute majority-preserving judgment aggregation rules? Frontiers in Artificial Intelligence and Applications. 501-506.
  • Show author(s) (2014). Ethical Choice in Unforeseen Circumstances. Lecture Notes in Computer Science (LNCS). 433-445.
  • Show author(s) (2014). A weakening of independence in judgment aggregation: agenda separability. Frontiers in Artificial Intelligence and Applications. 1055-1056.
  • Show author(s) (2016). Engineering Moral Agents - from Human Morality to Artificial Morality (Dagstuhl Seminar 16222).
  • Show author(s) (2014). A tutorial in judgment aggregation.
Academic lecture
  • Show author(s) (2021). Evaluating AI assisted subtitling.
  • Show author(s) (2021). AI video editing tools: What editors want and how far is AI from delivering?
  • Show author(s) (2019). What we talk about when we talk about AI and Ethics.
  • Show author(s) (2019). Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders.
  • Show author(s) (2019). April 22 – 26 , 2019, Dagstuhl Seminar 19171 Ethics and Trust: Principles, Verification and Validation.
  • Show author(s) (2019). Answer Set Programming for Judgment Aggregation.
  • Show author(s) (2017). Who's a good robot.
  • Show author(s) (2017). Towards moral autonomous systems.
  • Show author(s) (2017). Engineering Machine Ethics.
  • Show author(s) (2022). AI Journal Special Issue on Ethics for Autonomous Systems. Artificial Intelligence.
  • Show author(s) (2014). JA4AI - Judgment Aggregation for Artificial Intelligence (Dagstuhl Seminar 14202). Dagstuhl Reports. 27-39.
Academic anthology/Conference proceedings
  • Show author(s) (2019). Multi-Agent Systems - 16th European Conference, EUMAS 2018, Bergen, Norway, December 6-7, 2018, Revised Selected Papers. Lecture Notes in Computer Science 11450. Springer Nature.
Popular scientific article
  • Show author(s) (2017). Condorcet's jury theorem and the truth on the web. Vox publica.
Feature article
  • Show author(s) (2018). Machines That Know Right And Cannot Do Wrong: The Theory and Practice of Machine Ethics. The IEEE Intelligent Informatics Bulletin. 8-11.
  • Show author(s) (2016). Dagstuhl Manifesto - Engineering Moral Machines. Informatik-Spektrum.
Academic chapter/article/Conference paper
  • Show author(s) (2022). Smart Technology in the Classroom: Systematic Review and Prospects for Algorithmic Accountability. 27 pages.
  • Show author(s) (2022). Objective Tests in Automated Grading of Computer Science Courses: An Overview. 30 pages.
  • Show author(s) (2021). Social Bot Detection as a Temporal Logic Model Checking Problem. 16 pages.
  • Show author(s) (2021). Egalitarian Judgment Aggregation. 9 pages.
  • Show author(s) (2021). Digital Voodoo Dolls. 11 pages.
  • Show author(s) (2021). Artificial Intelligence: Is the power matched with responsibility?
  • Show author(s) (2020). Teaching AI Ethics: Observations and Challenges.
  • Show author(s) (2020). Predicting the Winners of Borda, Kemeny and Dodgson Elections with Supervised Machine Learning. 19 pages.
  • Show author(s) (2020). Model-Checking Information Diffusion in Social Networks with PRISM. 18 pages.
  • Show author(s) (2020). Circumvention by design - dark patterns in cookie consent for online news outlets. 1 page.
  • Show author(s) (2020). Bias mitigation with AIF360: A comparative study.
  • Show author(s) (2020). Addressing the ethical principles of the Norwegian National Strategy for AI in a kindergarten allocation system.
  • Show author(s) (2019). The Complexity of Elections with Rational Actors. 3 pages.
  • Show author(s) (2019). Answer Set Programming for Judgment Aggregation. 7 pages.
  • Show author(s) (2018). On the Distinction between Implicit and Explicit Ethical Agency. 7 pages.
  • Show author(s) (2017). Formal Models of Conflicting Social Influence. 17 pages.
  • Show author(s) (2017). Complexity Results for Aggregating Judgments using Scoring or Distance-Based Procedures. 10 pages.
  • Show author(s) (2014). A Judgment Set Similarity Measure Based on Prime Implicants. 2 pages.
  • Show author(s) (2013). Some complexity results for distance-based judgment aggregation. 13 pages.
  • Show author(s) (2013). Judgment Aggregation Rules and Voting Rules. 14 pages.
  • Show author(s) (2022). A content-aware tool for converting videos to narrower aspect ratios. 109-120. In:
    • Show author(s) (2022). IMX '22: ACM International Conference on Interactive Media Experiences. Association for Computing Machinery (ACM).
  • Show author(s) (2021). Artificial Intelligence: Is the Power Matched with Responsibility? In:
    • Show author(s) (2021). Meeting the Challenges of Existential Threats through Educational Innovation: A Proposal for an Expanded Curriculum. Routledge.

More information in national current research information system (CRIStin)


  • MediaFutures: Research Centre for Responsible Media Technology & Innovation. Role: co-leader of WP2 – User Modeling, Personalisation & Engagement.

  • Better Video Workflows via Real-Time Collaboration and AI Techniques in TV and New Media. Funded by The Research Council of Norway. Type of project: User-driven Research based Innovation (BIA). Grant: NOK 8.4 million. Role: main supervisor of one of the two doctoral students hired on this project.
  • The Machine Ethics Challenge to Artificial Intelligence and Society. Funded by the Strategic Programme for International Research Collaboration of the University of Bergen. Grant: NOK 75.000. Role: PI. The grant will support the establishment of a highly interdisciplinary international network of collaborators on the topic of machine ethics.



Marija Slavkovik is the project manager of a SAMKUL grant whose goal is to prepare funding proposals to explore machine ethics issues in modern journalism. The first meeting of the network was held in Bergen, November 29-30. Results here.

Through the support of SPIRE from the Faculty of Social Sciences at the University of Bergen, Marija Slavkovik worked to establish an international research network that engages in developing the interdisciplinary research area of logic-based methods for social network analysis in artificial intelligence (AI). Results here.


This is a detailed description of the modules of the course. Most modules and lectures have required reading material before class. Make sure you read the texts. If you have a problem accessing some of the literature, email the course organisers. The lectures will not be recorded. Slides will be made available after each lecture. The references listed under Other material are intended for further reading, should you want to engage more with the topic.

Module 1: An Introduction to Artificial Intelligence

This module is a crash course in artificial intelligence. We will start with a very brief history of the field and cover the basic concepts of reasoning, machine learning, knowledge representation and computational agents.

Module 2: Power and Politics in AI

Module 3: Privacy and AI

This module introduces some of the basic concepts of privacy that are relevant for AI.

Lecture 1: What is privacy?

  • Lecturer: Tobias Matzner
  • Abstract: This is a short introduction into the conceptual and socio-technical development of privacy. It identifies central issues that inform and structure current debates as well as transformations of privacy spurred by digital technology. In particular, it highlights central ambivalences of privacy between protection and de-politicization and the relation of individual and social perspectives.
  • Date: March 11, 2022 (Friday)
  • Slot: 9:30-10:30
  • Nota bene
  • Reading before class:
  • Other material (videos, tutorials, etc. ): will appear here

Lecture 2: Introduction to differential privacy

Lecture 3: Privacy and the law

  • Lecturer: Malgorzata Agnieszka Cyndecka
  • Abstract: This lecture gives an introduction to the GDPR covering objectives, material and geographical scope, main actors and notions, principles relating to processing of personal data, legal basis for processing of personal data, rights of the data subject, GDPR and risk enforcement.
  • Date: March 11, 2022 (Friday)
  • Slots: 11:45-12:30
  • Nota bene: This lecture is a recorded video.
  • Reading before class: Chapters 2 and 3 of the GDPR
  • Other material (videos, tutorial): will appear here

Lecture 4: Overview of relevant legal requirements for AI development

Module 4: Explainable AI

This module covers topics on explaining the behaviour of an AI system.

Lecture 1: What is an explanation?

Lecture 2: Explainable to whom?

Lecture 3: How machines explain - an introduction to concepts of XAI

  • Lecturer: Inga Strümke
  • Abstract: One of the main challenges of XAI is that machine learning methods do predictive modeling, as opposed to explainable modeling. Their task is to detect potentially complex and non-linear correlations and use these to give the most accurate predictions. Explanations, on the other hand, are supposed to be non-complex and - for some recipients of explanations - even non-linear. Although there are many methods available from the field of XAI, these don’t have a unifying principle, and there are no benchmarks available for comparing them. This makes entering the field potentially overwhelming, and so we will take a step back and discuss the three conceptual ways to approach explaining black box models. This discussion will involve a brief introduction to the arguably most popular XAI methods, namely SHAP and LIME.
  • Date: March 14, 2022 (Monday)
  • Slots: 11:45-12:30
  • Reading before class:
  • Other material (videos, tutorials, etc.): might appear here
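The abstract above mentions SHAP and LIME. The sketch below is not either of those methods; it is a minimal pure-Python illustration of the shared model-agnostic idea behind them: probe a black-box model with perturbed inputs and attribute changes in the prediction to individual features. The model and scoring scheme here (an occlusion-style "mask one feature at a time" score) are invented for illustration.

```python
# Toy occlusion-style feature attribution for a black-box model.
# We replace one feature at a time with a baseline value and record
# how much the model output moves: a crude, model-agnostic attribution.

def black_box(x):
    # Stand-in opaque model; in practice any trained predictor could go here.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by the output change when it is set to the baseline."""
    full = model(x)
    scores = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline   # occlude feature i
        scores.append(full - model(masked))
    return scores

print(occlusion_attribution(black_box, [1.0, 2.0, 5.0]))  # [3.0, 2.0, 0.0]
```

For this linear model the scores recover the weighted feature contributions exactly; for non-linear models the picture is less clean, which is precisely the gap that methods such as SHAP and LIME address more carefully.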

Lecture 4: Tutorials

For students who (want to) enjoy programming, pointers will be given to hands-on tutorials where they can try out some of the methods discussed in this and the following modules.

  • Date: March 14, 2022 (Monday)
  • Slots: 13:30-14:15
  • Links to tutorials: might appear here

Module 5: Fairness and AI

This module covers the problem of ensuring fairness of decisions made by an AI system, and also explores why this fairness is important and what it means.

Lecture 1: Unbiased data? Fair AI? Forget it!

  • Lecturer: Maja Van Der Velden
  • Abstract: While we all are affected, or will be, by algorithms, some of us are more vulnerable than others to biased data and unfair AI. Is a focus on unbiased data and fair AI the solution? Is there a universal understanding of fairness? Are there sources of neutral data or can we make existing data sets unbiased? If we answer ‘yes’ on these questions, does it mean that AI can be neutral? In this lecture we will engage with the understanding that technology is not neutral and explore what this means for working towards unbiased data and fair AI.
  • Date: March 16, 2022 (Wednesday)
  • Slot: 09:30-10:30
  • Reading before class:
  • Other material (videos, tutorials, etc.): might appear here

Lecture 2: Why does eliminating discrimination in society matter?

Lecture 3: Practicalities of conducting fairness assessments on ML models

Lecture 4: Debiasing algorithms

  • Lecturer: Marija Slavkovik
  • Abstract: This lecture gives an overview of the algorithmic tools available for mitigating the bias of some machine learning methods. The lecture is sufficiently general to be followed by people with no programming experience, but will give pointers on how to proceed for those who do want to engage hands-on.
  • Date: March 16, 2022 (Wednesday)
  • Slots: 13:30-14:15
  • Reading before class:
    • Chapter 11.3 and Chapter 11.4 from Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter and Julia Lane 2020. Big Data and Social Science Data Science Methods and Tools for Research and Practice. Routledge. Second Edition.
  • Other material:
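As a concrete illustration of what bias-mitigation toolkits such as AIF360 measure before and after debiasing, here is a minimal sketch of one common group-fairness metric, statistical parity difference. The data and group split below are invented for illustration, and real toolkits compute this (and many other metrics) over labelled datasets rather than raw lists.

```python
# Statistical parity difference: the gap in favourable-outcome rates
# between an unprivileged and a privileged group. A value of 0 means
# parity; negative values mean the unprivileged group receives the
# favourable outcome less often.

def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favourable | unprivileged) - P(favourable | privileged)."""
    return positive_rate(unprivileged) - positive_rate(privileged)

# 1 = favourable decision, 0 = unfavourable (made-up toy data)
unpriv = [1, 0, 0, 0]   # 25% favourable
priv = [1, 1, 1, 0]     # 75% favourable

print(statistical_parity_difference(unpriv, priv))  # -0.5
```

Debiasing algorithms then intervene on the data, the learning procedure, or the model's outputs to push such metrics closer to zero, usually trading off some predictive accuracy in the process.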

Module 6: Transparency and accountability

Lecture 1: Overview of algorithmic accountability

Lecture 2: AI accountability in practice

Lecture 3: Transparency


The exam for INFO901 consists of executing a project on a topic from AI Ethics connected to at least one of the lectures in the course, and writing a report on it. The project can be done individually or in pairs. The project should contain research work approximately equivalent to one conference paper. The report should follow the structure of a conference or journal article:

  • Introduction: Should include the research problem, hypothesis or topic; motivation for this problem/hypothesis/topic, including grounding in related work; a short description of the methodology used (if applicable), with scope and success criteria (if applicable); a contribution statement: how this work advances the state of the art in AI Ethics (and which field you aspire to contribute to with this work); and a link to a code and/or data repository (if relevant).
  • Preliminaries: All the relevant information from other work that is necessary in order for your project to be understood by the reader.
  • Related work: Either as the second or penultimate section. Describe work that addresses a similar research problem, hypothesis or topic to yours, and the similarities and differences with your work.
  • 1-3 chapters reporting on work, results or argumentation.
  • Conclusions: Outline how the research problem, hypothesis or topic has been addressed, and directions for future work.

The project reports should be between 10 and 15 pages (excluding references) formatted following one of the templates:

Preferably the projects should be written in English. Norwegian is also allowed. If you want to write in another language, you should secure the availability of a mentor fluent in that language.

The students are free to publish the reports as research articles to the venue of their choice.

Deadline for submitting the report: June 10th, 2022

Office hours with the course organisers are scheduled as needed. The method for submitting the proposal will be specified with the topic approval notification.

Selection of topics

The students should develop a project research problem, hypothesis or topic and submit it for approval by April 4, 2022, by email to marija.slavkovik@uib.no and miriag@ifi.uio.no. Use the subject "INFO901 topic for approval". The proposal should not exceed one page and should include:

  • Working Title
  • Either a research problem, hypothesis or topic
  • At least one related research article
  • Which class from the course is the project related to
  • A short declaration of aspired contribution (and to which field)
  • Planned methodology

All project proposals that satisfy basic feasibility and connectedness to the course will be approved.

Research groups