- E-mail: marija.slavkovik@uib.no
- Phone: +47 55 58 23 77
- Visitor address: Fosswinckels gate 6, Lauritz Meltzers hus, 5007 Bergen, room 613
- Postal address: Postboks 7802, 5020 Bergen
Marija Slavkovik is a professor at the University of Bergen in Norway. Her area of research is Artificial Intelligence (AI), with expertise in collective reasoning. Slavkovik is active in the AI subdisciplines of multi-agent systems, machine ethics and computational social choice.
Slavkovik believes that the world can be improved by automating away the boring, repetitive and dangerous human tasks, and that AI has a crucial role to play towards this goal. In AI, the big problem she hopes to solve is the efficient self-coordination of systems of artificial intelligent agents.
In machine ethics, Slavkovik is active on the engineering side of the problem: how can we build autonomous systems and artificial agents that behave ethically? Want to know what has been happening in machine ethics since it stopped being a science-fiction-only topic? There is a tutorial for that. Slavkovik co-organised a Dagstuhl Seminar on this topic in 2019. She is also one of the guest editors of the Special Issue on Ethics for Autonomous Systems of the AI Journal.
Slavkovik is the vice-chair of the Norwegian Artificial Intelligence Society and a member of the informal advisory group on Ethical, Legal and Social Issues of CLAIRE. She is on the education committee of NORA, currently working on developing a national PhD course on AI ethics.
In computational social choice and multi-agent systems, Slavkovik is particularly active in judgment aggregation. If you are wondering what this is, there is a tutorial for that. Her new passion in this field is looking for ways to take into account the social network interactions of agents and the impact these can have on collective reasoning and decision-making, particularly in aggregation. For more on what social network analysis has to do with AI, go here.
Slavkovik was the chair and host of the 16th European Conference on Multi-Agent Systems (EUMAS), held December 6-7, 2018 in Bergen. Here are the proceedings. She is also on the board of EURAMAS.
Marija is an active speaker on issues of AI and ethics. Below are links to some of her talks, articles and interviews.
Video & audio
- Kunstig intelligens og maskinetikk (Artificial intelligence and machine ethics), Akademisk lunsj, Bergen Library. Available as a podcast.
- Who's a good robot? CHRISTIE-KONFERANSEN: The robots are coming, are you? Video (jump to 5:36:00).
- For the good of all, CLAIRE (jump to 2:20)
- Ben Gyford's Machine Ethics Podcast.
- AI and Ethics. Teaching machines to behave with TechNadine.
- AI Inspiration talk. Nordic testbed network.
Text
- The usefulness of useless AI by Marija Slavkovik. The AI Hub.
- Press for Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders: New Scientist, Daily Mail
- People know when to break the rules, but machines don’t by the Tek-Lab.
- Shame on you robot by Torhild Dahl
- Researcher profile by UiB
- Condorcet’s jury theorem and the truth on the web by Marija Slavkovik in VoxPublica
- "Nothing is free – we trade away a part of our time and attention" (original title: „Ништо не е бесплатно – ние трампаме дел од нашето време и внимание") by Ирена Трајковска in VezilkaMagazine.
Doctoral students (main supervisor)
- Than Htut Soe started in February 2018 and is a doctoral student on the BIA-funded project Better Video Workflows via Real-Time Collaboration and AI-Techniques in TV and New Media. Soe's thesis explores the cooperation between AI methods and human interaction in video editing. Recent publication: Circumvention by design - dark patterns in cookie consent for online news outlets at NordiCHI 2020: 19:1-19:12.
- Mina Young Pedersen started in October 2019. Pedersen's thesis explores the interplay of logical reasoning and social networks.
Past students
- Flavio Tisi (co-supervision with Sonja Smets).
- Einar Søreide Johansen
- Hanna Kubacka (co-supervision with Jan-Joachim Rückmann). Related publication: Predicting the winners of Borda, Kemeny and Dodgson elections with supervised machine learning [pdf]
Courses:
- Spring 2022 INFO901 Introduction to AI Ethics (graduate course)
- Autumn 2021 AIKI100 Introduction to AI
- Spring 2021 INFO383 Research topics in AI ethics.
- Autumn 2020 INFO282 Knowledge representation and reasoning.
- Spring 2020 INFO381 Research Topics in AI. The topic of the course is AI Ethics. Detailed program.
- Autumn 2019 INFO283 Basic Algorithms in Artificial Intelligence.
- Spring 2019 INFO284 Machine Learning.
- Spring 2017 INFO381 Research Topics in AI. The topic of the course is Machine Ethics. Detailed program.
- Autumn 2016, 2017, 2018 INFO125 Data Management.
Office hours are by appointment.
For the freshest list of publications visit Marija's home page, and to see how other people use Marija's publications visit her Google Scholar profile page.
- (2023). The Jiminy Advisor: Moral Agreements among Stakeholders Based on Norms and Argumentation. Journal of Artificial Intelligence Research. 737-792.
- (2023). Mythical Ethical Principles for AI and How to Attain Them. 29 pages.
- (2023). Egalitarian judgment aggregation. Autonomous Agents and Multi-Agent Systems. 15-32.
- (2023). Detecting bots with temporal logic. Synthese. 79.
- (2023). Automatic Detection of Manipulative Consent Management Platforms and the Journey into the Patterns of Darkness.
- (2022). Smart Technology in the Classroom: Systematic Review and Prospects for Algorithmic Accountability. 27 pages.
- (2022). Probabilistic Judgement Aggregation by Opinion Update. 12 pages.
- (2022). Objective Tests in Automated Grading of Computer Science Courses: An Overview. 30 pages.
- (2022). Netreason: Reasoning about social networks. Journal of Logic and Computation. 1015-1016.
- (2022). Markov chain model representation of information diffusion in social networks. Journal of Logic and Computation. 1195-1211.
- (2022). Logic of Visibility in Social Networks. 17 pages.
- (2022). Computational ethics. Trends in Cognitive Sciences. 388-405.
- (2022). Automating Moral Reasoning (Invited Paper). 1 page.
- (2022). AI Journal Special Issue on Ethics for Autonomous Systems. Artificial Intelligence.
- (2022). A content-aware tool for converting videos to narrower aspect ratios. 109-120. In: IMX '22: ACM International Conference on Interactive Media Experiences. Association for Computing Machinery (ACM).
- (2021). Social Bot Detection as a Temporal Logic Model Checking Problem. 16 pages.
- (2021). Egalitarian Judgment Aggregation. 9 pages.
- (2021). Digital Voodoo Dolls. 11 pages.
- (2021). Artificial Intelligence: Is the Power Matched with Responsibility? In: Meeting the Challenges of Existential Threats through Educational Innovation: A Proposal for an Expanded Curriculum. Routledge.
- (2021). AI video editing tools: What editors want and how far is AI from delivering?
More information in national current research information system (CRIStin)
Ongoing:
- MediaFutures: Research Centre for Responsible Media Technology & Innovation. Role: co-leader of WP2 – User Modeling, Personalisation & Engagement.
- Better Video Workflows via Real-Time Collaboration and AI-Techniques in TV and New Media. Funded by The Research Council of Norway. Type of project: User-driven Research-based Innovation (BIA). Grant: NOK 8.4 million. Role: main supervisor of one of the two doctoral students hired on this project.
- The Machine Ethics Challenge to Artificial Intelligence and Society. Funded by the Strategic Programme for International Research Collaboration of the University of Bergen. Grant: NOK 75,000. Role: PI. The grant will support the establishment of a highly interdisciplinary international network of collaborators on the topic of machine ethics.
PAST
Marija Slavkovik is the project manager of a SAMKUL grant whose goal is to prepare funding proposals to explore the machine ethics issues in modern journalism. The first meeting of the network was held in Bergen, November 29-30. Results here.
Through the support of SPIRE from the Faculty of Social Sciences at the University of Bergen, Marija Slavkovik was working to establish an international research network that engages in developing the interdisciplinary research area of logic-based methods for social network analysis in artificial intelligence (AI). Results here.
Description
This is a detailed description of the modules of the course. Most modules and lectures have required reading material before class. Make sure you read the texts. If you have a problem accessing some of the literature, email the course organisers. The lectures will not be recorded. Slides will be made available after the lecture. The references listed under Other material are intended for further reading, should you want to engage more with the topic.
Module 1: An Introduction to Artificial Intelligence
This module is a crash course in artificial intelligence. We will start with a very brief history of the field, and cover the basic concepts of reasoning, machine learning, knowledge representation and computational agents.
- Lecturer: Marija Slavkovik
- Contact: marija.slavkovik@uib.no
- Date: March 7, 2022 (Monday)
- Slots: 10:45-11:30, 11:45-12:30, 13:30-14:15, 14:30-15:30
- Nota bene: you can skip this module if you have basic familiarity with AI
- Instead of reading before class, interact with the How normal am I installation.
- Other material (videos, tutorials, etc.):
- An artist's representation of unethical smart tech.
- You do not need intelligence for unethical computing. A report on the UK Post Office case, where a mistake in the IT system caused people to be prosecuted and fined for crimes they did not commit.
- How artificial intelligence is proposed to be used to protect the EU borders
Module 2: Power and Politics in AI
- Lecturer: Miria Grisot
- Contact: miriag@uio.no
- Abstract: This is an introduction to power and politics in AI in Information Systems Research from a sociotechnical perspective. It gives an overview of the current discussions in the field, such as how AI changes work and organizing in organizations, and how to re-think the management of technology in relation to AI, with emphasis on the ethical aspects.
- Date: March 9, 2022 (Wednesday)
- Slot: 9:30-10:15, 10:30-11:15, 11:45-12:30, 13:30-14:15
- Reading before class:
- Other material:
Module 3: Privacy and AI
This module introduces some of the basic concepts of privacy that are relevant for AI.
Lecture 1: What is privacy?
- Lecturer: Tobias Matzner
- Abstract: This is a short introduction into the conceptual and socio-technical development of privacy. It identifies central issues that inform and structure current debates as well as transformations of privacy spurred by digital technology. In particular, it highlights central ambivalences of privacy between protection and de-politicization and the relation of individual and social perspectives.
- Date: March 11, 2022 (Friday)
- Slot: 9:30-10:30
- Nota bene
- Reading before class:
- Other material (videos, tutorials, etc. ): will appear here
Lecture 2: Introduction to differential privacy
- Lecturer: Fedor Fomin
- Abstract: Differential privacy (DP) is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It can be seen as a mathematical model of some aspects of privacy. The lecture gives a gentle introduction to this topic and discusses what guarantees differential privacy makes and does not make. (For a concrete illustration, see the code sketch after this lecture's materials.)
- Date: March 11, 2022 (Friday)
- Slots: 10:45-11:30
- Reading before class:
- A Gentle Introduction to Differential Privacy by Tim Titcombe
- Other material:
- The detailed core reference on differential privacy: Cynthia Dwork and Aaron Roth (2014). The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science Vol. 9, Nos. 3–4 (2014) 211–407.
- A recommended popular science book: Michael Kearns and Aaron Roth (2019). The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press, Inc., USA. We cannot provide access to this. A talk by Michael Kearns on the book can be seen here.
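Not part of the lecture materials: to make the guarantee discussed above a little more concrete, here is a minimal Python sketch of the Laplace mechanism, the standard building block of differential privacy. The dataset, query and parameter values are invented purely for illustration.

```python
# Minimal sketch of the Laplace mechanism (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    # sensitivity: how much the query result can change when one person's
    # data is added or removed; epsilon: the privacy budget (smaller epsilon
    # means more noise and a stronger privacy guarantee).
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a counting query over a toy dataset.
ages = np.array([23, 31, 45, 52, 67, 29])
true_count = int(np.sum(ages > 40))   # a counting query has sensitivity 1
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(true_count, round(private_count, 2))
```

The sketch only shows where the noise enters; the formal guarantees, and their limits, are what the lecture covers.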
Lecture 3: Privacy and the law
- Lecturer: Malgorzata Agnieszka Cyndecka
- Abstract: This lecture gives an introduction to the GDPR covering objectives, material and geographical scope, main actors and notions, principles relating to processing of personal data, legal basis for processing of personal data, rights of the data subject, GDPR and risk enforcement.
- Date: March 11, 2022 (Friday)
- Slots: 11:45-12:30
- Nota bene: This lecture is a recorded video.
- Reading before class: Chapters 2 and 3 of the GDPR
- Other material (videos, tutorial): will appear here
Lecture 4: Overview of relevant legal requirements for AI development
- Lecturer: Kari Laurmann
- Abstract: This lecture covers the topics of privacy, GDPR requirements and AI. It aims to give a basic overview of relevant legal requirements for AI development, and some real case examples of how to apply the law in practice (examples from the sandbox).
- Date: March 11, 2022 (Friday)
- Slots: 13:30-14:15
- Nota bene: all of the reports from Datatilsynet can be found here.
- Reading before class:
- It's getting personal. Privacy trends 2017
- Out of Control. How consumers are exploited by the online advertising industry, 14.01.2020 (mandatory: the first 43 pages). A summary of the report can be found here.
- Other material (videos, tutorials, etc.): will appear here
Module 4: Explainable AI
This module covers topics on explaining the behaviour of an AI system.
Lecture 1: What is an explanation?
- Lecturer: Tim Miller
- Abstract: In this session, we will study how people explain complex things to each other. When we talk about explainable AI, we are discussing how a machine can explain to someone how and why it makes decisions. What should an explanation from a machine look like, given that they calculate quite differently to how we think? This talk argues that we should take inspiration from how people explain things to each other, as this gives us useful pointers for how people help others to understand how and why things happen.
- Date: March 14, 2022 (Monday)
- Slot: 09:30-10:30
- Reading before class:
- Other material:
- If you would like an overview of the technical details of explainability and interpretability: Vaishak Belle and Ioannis Papantonis, 2021. Principles and Practice of Explainable Machine Learning. Frontiers in Big Data.
- If you would like to learn more about how people explain things to each other: Tim Miller 2018. Explanation in Artificial Intelligence: Insights from the Social Sciences.
Lecture 2: Explainable to whom?
- Lecturer: Alexander Kempton and Polyxeni Vassilakopoulou
- Abstract: AI explanations are important for many reasons and can be addressed to many different audiences. For instance, experts need explanations to evaluate or improve AI-enabled systems, while impacted groups need to make sense of what lies behind the AI-enabled systems that affect them. In this session, we will first explore different stakeholders' needs for AI explanations and we will then focus on explanations for end-users. We will discuss the role of explanations for the use of AI-enabled systems and how different sets of users have different explainability needs.
- Date: March 14, 2022 (Monday)
- Slots: 10:45-11:30
- Reading before class:
- Other material (videos, tutorials, etc.): might appear here
Lecture 3: How machines explain - an introduction to concepts of XAI
- Lecturer: Inga Strümke
- Abstract: One of the main challenges of XAI is that machine learning methods do predictive modeling, as opposed to explainable modeling. Their task is to detect potentially complex and non-linear correlations and use these to give the most accurate predictions. Explanations, on the other hand, are supposed to be non-complex and - for some recipients of explanations - even non-linear. Although there are many methods available from the field of XAI, these don’t have a unifying principle, and there are no benchmarks available for comparing them. This makes entering the field potentially overwhelming, and so we will take a step back and discuss the three conceptual ways to approach explaining black box models. This discussion will involve a brief introduction to arguably the most popular XAI methods, namely SHAP and LIME. (A minimal usage sketch of SHAP appears after this lecture's materials.)
- Date: March 14, 2022 (Monday)
- Slots: 11:45-12:30
- Reading before class:
- Other material (videos, tutorials, etc.): might appear here
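As a small, optional taster for the tutorial slot that follows, here is a sketch of what using one of the methods named above looks like in code. It uses SHAP on a scikit-learn regressor; the dataset and model are arbitrary placeholders, not course material.

```python
# Illustrative only: post-hoc feature attributions with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # efficient Shapley values for tree ensembles
shap_values = explainer.shap_values(X.iloc[:10])   # one row of contributions per sample

# Feature contributions for the first prediction, largest magnitude first.
contributions = sorted(zip(X.columns, shap_values[0]), key=lambda t: -abs(t[1]))
print(contributions[:5])
```

LIME is used in a broadly similar way: it fits a simple local surrogate model around a single prediction and reads off its coefficients as the explanation.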
Lecture 4: Tutorials
For the students that (want to) enjoy programming, pointers will be given to hands-on tutorials where they can try out some of the methods discussed in this and the next modules.
- Date: March 14, 2022 (Monday)
- Slots: 13:30-14:15
- Links to tutorials: might appear here
Module 5: Fairness and AI
This module covers the problem of ensuring fairness of decisions made by an AI system, but also explores the questions of why this fairness is important and what it means.
Lecture 1: Unbiased data? Fair AI? Forget it!
- Lecturer: Maja Van Der Velden
- Abstract: While we all are affected, or will be, by algorithms, some of us are more vulnerable than others to biased data and unfair AI. Is a focus on unbiased data and fair AI the solution? Is there a universal understanding of fairness? Are there sources of neutral data or can we make existing data sets unbiased? If we answer ‘yes’ on these questions, does it mean that AI can be neutral? In this lecture we will engage with the understanding that technology is not neutral and explore what this means for working towards unbiased data and fair AI.
- Date: March 16, 2022 (Wednesday)
- Slot: 09:30-10:30
- Reading before class:
- Other material (videos, tutorials, etc.): might appear here
Lecture 2: Why does eliminating discrimination in society matter?
- Lecturer: Kristoffer Chelsom Vogt
- Abstract: A perspective from sociology
- Date: March 16, 2022 (Wednesday)
- Slots: 10:45-11:30
- Reading before class:
- Anna Lauren Hoffmann (2019) Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, Information, Communication & Society, 22:7, 900-915
- Mike Zajko (2022) Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates.
- Joyce K, Smith-Doerr L, Alegria S, et al. (2021) Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius.
- Other material (videos, tutorials, etc.): might appear here
Lecture 3: Practicalities of conducting fairness assessments on ML models
- Lecturer: Robindra Prabhu
- Abstract: As AI solutions have become more ubiquitous in society, so have concerns about the ethical and social fallout that follows in their wake. Reports of biased, unfair and socially misaligned models have triggered a growing body of academic research on “fair, transparent and accountable” ML. Industry and policy makers have responded with a plethora of principles and guidelines for “responsible AI”, lawmakers are proposing new regulatory frameworks to curtail the risks associated with this new, emerging technology, and auditors are developing algorithmic audits.
- All of this notwithstanding, it remains unclear how these separate contributions should translate into practical interventions in the development process. How exactly does one go about conducting a fairness assessment of a model in practice? In this lecture we will engage with this question, using NAV’s model for predicting the duration of sick leave as a case in point. We will touch upon the challenges of:
- Going from principles to practice
- Bridging the divide between legal concepts of fairness and technical fairness metrics
- Implementing checks and balances into the development process
- Socio-technical blindspots and shortcomings of any fairness assessment
- Date: March 16, 2022 (Wednesday)
- Slots: 11:45-12:30
- Reading before class:
- Wachter, Sandra and Mittelstadt, Brent and Russell, Chris, Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law (January 15, 2021). West Virginia Law Review, Vol. 123, No. 3, 2021.
- Barocas, Hardt, and Narayanan (2022). Chapter 5 “Testing discrimination in practice”. Fairness and Machine Learning.
- Other material (videos, tutorials, etc..):
Lecture 4: Debiasing algorithms
- Lecturer: Marija Slavkovik
- Abstract: This lecture gives an overview of the algorithmic tools available for mitigating bias in some machine learning methods. The lecture is sufficiently general to be followed by people with no programming experience, but will give pointers on how to proceed for those who do want to engage hands-on. (See the small reweighing sketch after the materials below.)
- Date: March 16, 2022 (Wednesday)
- Slots: 13:30-14:15
- Reading before class:
- Chapters 11.3 and 11.4 from Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter and Julia Lane 2020. Big Data and Social Science: Data Science Methods and Tools for Research and Practice. Routledge. Second Edition.
- Other material:
- Play with some debiasing tools. A web-interactive tutorial made available by AIF360.
- Does TikTok have a race problem? This piece of investigative journalism by Forbes shows an approach that journalists took to detect bias without having access to the algorithm.
- If AI is the problem, is debiasing the solution? EDRi is a European network of non-governmental organisations dedicated to defending rights and freedoms online. The linked article summarises a recent report they have commissioned (Beyond Debiasing: Regulating AI and its Inequalities, authored by Agathe Balayn and Dr. Seda Gürses). The report outlines the limits of technical debiasing measures as a solution to structural discrimination and inequality reinforced or propagated through AI systems.
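The lecture itself requires no programming, but for those who want to see what a mitigation technique amounts to in code, here is a self-contained numpy sketch of one classic pre-processing method, reweighing (in the style of Kamiran and Calders), which is also among the tools exposed by AIF360. The data is synthetic and purely illustrative.

```python
# Illustrative only: reweighing training examples so that the protected
# attribute A and the label Y become statistically independent.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = rng.integers(0, 2, size=n)                                 # protected attribute (0/1)
Y = (rng.random(n) < np.where(A == 1, 0.6, 0.4)).astype(int)   # labels biased w.r.t. A

# Demographic parity difference before mitigation: P(Y=1|A=1) - P(Y=1|A=0)
dp_before = Y[A == 1].mean() - Y[A == 0].mean()

# Reweighing: weight(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)
weights = np.empty(n)
for a in (0, 1):
    for y in (0, 1):
        group = (A == a) & (Y == y)
        weights[group] = (np.mean(A == a) * np.mean(Y == y)) / np.mean(group)

def weighted_positive_rate(a):
    return np.average(Y[A == a], weights=weights[A == a])

dp_after = weighted_positive_rate(1) - weighted_positive_rate(0)
print(f"demographic parity difference before: {dp_before:.3f}, after: {dp_after:.3f}")
```

A learner trained on the weighted examples no longer sees a statistical association between A and Y; whether that is the right notion of fairness for a given application is exactly what the rest of the module problematises.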
Module 6: Transparency and accountability
Lecture 1: Overview of algorithmic accountability
- Lecturer: Maranke Wieringa (they/them)
- Abstract: As research on algorithms and their impact proliferates, so do calls for scrutiny/accountability of algorithms. In the lecture I will briefly introduce accountability theory from public administration, introduce a sociotechnical understanding of algorithmic systems, and then discuss algorithmic accountability. The importance of algorithmic accountability is discussed, and I will introduce some strands of research which are particularly useful when starting to realize algorithmic accountability.
- Date: March 18, 2022(Friday)
- Slot: 09:30-10:30
- Reading before class:
- Other material:
- Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh. 2021. Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. In Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. ACM, New York, NY, USA, 12 pages.
- Joshua A. Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In FAccT ’21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA (Virtual). ACM, New York, NY, USA, 14 pages.
Lecture 2: AI accountability in practice
- Lecturer: Alexander Kempton and Polyxeni Vassilakopoulou
- Contact: polyxenv@uia.no / alexansk@ifi.uio.no
- Abstract: In this session, we will introduce and discuss accountability in the context of AI and approaches for achieving AI accountability in practice. Accountability relates to the obligations of those designing, deploying and operating AI-enabled technologies (sometimes expressed as their “responsibility for”), the interrogation ability of supervision authorities and those affected by AI-enabled technologies, and also the post-hoc sanctioning potential for blamable agents (when things go wrong). The use of AI-enabled technology can introduce issues for, and require new ways of, establishing accountability. During the two-hour session we will:
- Introduce different accountability aspects and their relationship to managing AI
- Have a group-based discussion around accountability issues and AI
- Discuss the importance of making data work visible for AI accountability
- Date: March 18, 2022 (Friday)
- Slot: 10:45-11:30, 11:45-12:30
- Nota bene
- Reading before class
- Reading after class:
- Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P. & Mitchell, M. (2021). Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
- Selbst, A. D. (2021). An Institutional View Of Algorithmic Impact Assessments. Harvard Journal of Law & Technology (35)
- Other material:
Lecture 3: Transparency
- Lecturer: Nick Diakopoulos
- Abstract: This session will introduce a framework for applying information transparency to algorithmic decision-making systems and discuss challenges in implementing such a framework. Participants will be engaged in active learning to apply the framework.
- Date: March 18, 2022 (Friday)
- Slot: 14:30-15:30
- Nota bene:
- Reading before class:
- N. Diakopoulos. (2020) Transparency. Oxford Handbook of Ethics and AI. Eds. Markus Dubber, Frank Pasquale, Sunit Das. May, 2020
- Margaret Mitchell, et al (2019) “Model Cards for Model Reporting,” Proceedings of the Conference on Fairness, Accountability, and Transparency (2019), 220-229
- Turilli, Matteo, and Luciano Floridi (2009). “The Ethics of Information Transparency.” Ethics and Information Technology 11, no. 2 (2009).
- Other material: may appear here
Exam
The exam for INFO901 consists of executing, and writing a report on, a project with a topic from AI ethics connected to at least one of the lectures in the course. The project can be done individually or in pairs. The project should contain research work approximately equivalent to one conference paper. The report should follow the structure of a conference or a journal article:
- Introduction: Should include the research problem, hypothesis or topic. Motivation of this problem/hypothesis/topic, including grounding in related work. A short description of the methodology used (if applicable), scope and success criteria (if applicable). A contribution: how this work advances the state of the art in AI ethics (and which field you aspire to contribute to with this work). A link to a code and/or data repository (if relevant).
- Preliminaries: All the relevant information from other work that is necessary for your project to be understood by the reader.
- Related work: Either as the second or the penultimate section. Describe work that addresses a similar research problem, hypothesis or topic to yours. Describe the similarities and differences with your work.
- 1-3 chapters reporting on work, results or argumentation
- Conclusions: Outline how the research problem, hypothesis or topic has been addressed. Outline directions for future work.
The project reports should be between 10 and 15 pages (excluding references) formatted following one of the templates:
- LaTeX2e Proceedings Templates download (zip, 309kb)
- Microsoft Word Proceedings Templates (zip, 559kb)
Preferably the projects should be written in English. Norwegian is also allowed. If you want to write in another language, you should secure the availability of a mentor fluent in that language.
The students are free to publish the reports as research articles to the venue of their choice.
Deadline for submitting the report: June 10th, 2022
Office hours with the course organisers: scheduled as needed. The method for submitting the report will be specified with the topic approval notification.
Selection of topics
The students should develop a project research problem, hypothesis or topic and submit it for approval by April 4, 2022, by email to marija.slavkovik@uib.no and miriag@ifi.uio.no. Use the subject "INFO901 topic for approval". The proposal should not exceed one page and should include:
- Working Title
- Either a research problem, hypothesis or topic
- At least one related research article
- Which class from the course is the project related to
- A short declaration of aspired contribution (and to which field)
- Planned methodology
All project proposals that satisfy basic feasibility and connectedness to the course will be approved.