The Centre for Social Algorithms


Research

What do biased, opaque, and unethically behaving computer programs have in common? To what degree can algorithmic interventions address fundamental social problems? These are the questions we explore at our centre.


Algorithms are to computer programs what architectural blueprints are to houses. When electricity and indoor plumbing were invented, we changed the way we design houses. Over roughly the last decade, a similarly radical change has taken place in how we use computers. We now need to redesign the algorithmic "blueprint" to accommodate this change and to make algorithms responsive and responsible to society.

An algorithm is a clever design by which a mathematical problem is broken down in a way that optimises the use of a particular resource, such as time, memory space, or communication and coordination with other algorithms. It specifies which operations should be executed on the input data, in which order, and what results should be returned. A mathematical model is a representation of a problem in the form of a mathematical function. For example, profit is income minus expenses. An algorithm determines how the operation "minus" is executed step by step. A programmer uses an algorithm to develop a computer program in a programming language of their choice.
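A minimal sketch of this distinction, using only the profit example above (the function and the numbers are ours, purely for illustration): the model states that profit equals income minus expenses; the algorithm spells out, step by step, one way to compute it; and the program is that algorithm written in a particular language, here Python.

```python
# Mathematical model: profit = income - expenses.
# One possible algorithm: accumulate the income items, accumulate the
# expense items, then subtract the two totals.

def profit(incomes, expenses):
    """Compute profit from lists of individual income and expense items."""
    total_income = 0
    for amount in incomes:        # step 1: sum the income items
        total_income += amount
    total_expenses = 0
    for amount in expenses:       # step 2: sum the expense items
        total_expenses += amount
    return total_income - total_expenses   # step 3: subtract the totals

# Example usage with made-up numbers:
print(profit([1200, 800], [500, 250]))  # prints 1250
```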

Algorithms were originally designed under many operating assumptions that together formed a virtual working envelope. The main feature of the working envelope is that it separates computation from the non-experts who use its results. The very first computers were, in fact, rooms that computation experts walked into, and access to such a room was limited to people who "knew what they were doing". Like the high fashion of the pre-war world, an algorithm was custom-built to fit the customer's problem: a specific instance of a computational problem in a specified, predictable context. A trained expert would mathematically model the problem of interest and identify the necessary input. The input information would be collected and represented in a formal language by domain experts and computation experts. The abilities of the people who would use the output were accounted for and informed the mathematical model of the problem being solved. Although computers became objects in a room, the algorithms running on them continued to exist in a virtual working envelope whose contact with the real world was limited to two points: input and output. The algorithm is "blind" to anything that occurs outside of the designated input and has no power beyond the output it produces.

Today the working envelope is broken, a direct consequence of the world's increased need for computation. Going back to the envelope approach is not possible without giving up on computation altogether. What is required is a change of perspective: we need to reconsider how mathematical models and algorithms are designed for underspecified contexts of input and output.

At the centre we consider the entire life-cycle of the input information: how it informs the mathematical model of the problem being solved and how an algorithm is tied to this model, but also the life-cycle of the output: how computation impacts the world it serves. By considering the entire life-cycle of information input and computation output, we research algorithms designed to operate without a working envelope, directly in a socio-technical society. Data, mathematical modelling, and algorithms are treated both as individual challenges and as participants in the process of finding computational solutions. In each of these areas we explicitly consider the ethical issues that arise from the absence of a working envelope. Because machine learning has played an important role in expanding computation to problems that have traditionally been handled only by people, we consider machine learning algorithms explicitly.

We study how to develop algorithms that can be held socially accountable. First, this involves addressing the loss of input predictability that comes with the breaking of the envelope. Namely, we design algorithms that can handle data that is dynamic, streamed, distributed, or only partially known. We are explicitly concerned with designing socially interacting algorithms that are robust to measurement errors, outliers, and other types of data corruption. We also study how various forms of societal and individual impact can be detected and used to inform algorithm design.
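As a small, self-contained sketch (ours, not a system built at the centre) of what robustness to corrupted, streamed input can look like: the running median below processes values one at a time, never needing the whole input in advance, and a single wildly corrupted reading barely moves it, whereas the same reading would dominate a running mean.

```python
import heapq

class StreamingMedian:
    """Maintains the running median of a data stream using two heaps.

    Unlike a running mean, the median is robust to outliers and
    measurement errors, and it never needs to see the whole stream
    at once: items are processed one by one as they arrive.
    """

    def __init__(self):
        self._lower = []  # max-heap of the smaller half (stored negated)
        self._upper = []  # min-heap of the larger half

    def add(self, value):
        # Route the new value to the correct half, then rebalance so
        # the two heaps differ in size by at most one element.
        if self._lower and value > -self._lower[0]:
            heapq.heappush(self._upper, value)
        else:
            heapq.heappush(self._lower, -value)
        if len(self._lower) > len(self._upper) + 1:
            heapq.heappush(self._upper, -heapq.heappop(self._lower))
        elif len(self._upper) > len(self._lower) + 1:
            heapq.heappush(self._lower, -heapq.heappop(self._upper))

    def median(self):
        if len(self._lower) > len(self._upper):
            return -self._lower[0]
        if len(self._upper) > len(self._lower):
            return self._upper[0]
        return (-self._lower[0] + self._upper[0]) / 2

# A wildly corrupted reading (1e9) barely moves the median,
# while it would dominate a running mean.
stream = StreamingMedian()
for reading in [2.1, 1.9, 2.0, 1e9, 2.2]:
    stream.add(reading)
print(stream.median())  # prints 2.1
```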