Digital Humanities Network at UiB
Guest lectures and workshop

How to do research on algorithms when you're not a programmer

Media, society and culture are increasingly affected by algorithms that do everything from recommending YouTube videos to assessing the risk that a criminal will commit a new crime. Taina Bucher and Ysabel Gerrard will discuss how they have researched algorithms using social science and humanities methodologies. The talks will be followed by a workshop.


This event is a collaboration between UiB's Research group for media use and audience studies and the Digital Humanities Network.


10:15: Ysabel Gerrard: Circumventing content moderation on social media: A tale of two algorithms

11:15: Taina Bucher: Speculative algorithm methods and algorithm as speculation

12:00: A light lunch will be served.

12:45: Workshop. A selection of five-minute lightning talks by UiB researchers about research cases, with discussion and feedback from Taina Bucher, Ysabel Gerrard and the research community.

16:00: End


10:15: Ysabel Gerrard: Circumventing content moderation on social media: A tale of two algorithms

Abstract: Social media companies make a series of algorithmically-enforced decisions about what counts as ‘problematic’ content. Some of these rules are applied through hashtag moderation: the automated process of either blocking search results for particular tags or issuing public service announcements (PSAs) when users search for troubling terms. One example is the 2012 banning of Instagram, Pinterest and Tumblr hashtags relating to eating disorders and self-harm (Gerrard, 2018). But hashtag moderation does not always work, as users often circumvent the rules by simply not using hashtags or by using what Chancellor et al (2016) call ‘workaround tags’ (e.g., #anorexia becomes #anorexiaa). Although the hashtag has become an indicator of where problematic content can be found, this has produced limited understandings of how such content actually circulates.

Perhaps most pressingly, the talk shows how platforms work directly against their own content moderation policies by recirculating controversial content through recommendation systems. An example of this is the tale of British teenager Molly Russell, whose father found that Pinterest had been recommending self-harm and suicide-related content to her email account in the months leading up to her suicide (Gerrard and Gillespie, 2019). Using eating disorder and self-harm communities as case studies, this talk asks how we can research algorithmically-altered experiences, paying especially close attention to some of social media’s most vulnerable users.

This talk tells a tale of two algorithms, proposing new research methods for social scientists and scholars from the humanities to understand two academically neglected yet vital aspects of social media users’ communicative experiences: (1) the circumvention of automated methods of content moderation, and (2) algorithmically recommended content. It offers a commentary on how social media users circumvent algorithmically enforced rules, and how platforms play a game of cat-and-mouse to keep up.

Bio: Dr. Ysabel Gerrard is a Lecturer in Digital Media and Society at the University of Sheffield (UK). She researches social media content moderation with an emphasis on the gendered aspects of platforms’ decisions and practices. Ysabel is the current Vice Chair of ECREA’s Digital Culture and Communication section, the Book Reviews Editor for Convergence: The International Journal of Research into New Media Technologies, and a former Microsoft Research New England intern. Her writing and research have been featured in venues like BBC News, VICE and WIRED, and her recent policy contributions to Instagram have been credited in venues like ELLE, The Guardian and The Washington Post.

11:15: Taina Bucher: Speculative algorithm methods and algorithm as speculation

Abstract: As algorithms have become a ubiquitous feature of society and culture, the need for social scientists and scholars from the humanities to understand their workings and impact is pressing. Yet there is a sense of uncertainty and, perhaps, insecurity on the part of the public and the academic community as to what constitutes proper terminology and ways of knowing algorithms. The boundaries of expert communities are preserved by demarcating those who have the right to speak from those who do not (Seaver, 2017). Not infrequently, these boundaries are maintained by social scientists and humanities scholars themselves. In this talk I want to challenge this terminological and methodological anxiety and suggest that positing media scholars as epistemic and disciplinary outsiders is not just a distraction but also wrong. If we want to engage with the question of how to study algorithms, two things first need to be undone. First, we need to critically interrogate the assumption of an algorithm proper, and second, question the idea of what counts as expert knowledge and who that expert is.

Following on from some of the arguments made in my book IF…THEN about the multiplicity of algorithms, this talk advances an approach to algorithms informed by what has been variously termed inventive and/or speculative methods (Dunne and Raby, 2013; Lury and Wakeford, 2012; Michael, 2016). To speculate methodologically means resisting the temptation to treat methods as “predetermined entities that exist separate from the research event” (Springgay and Truman, 2018: 204). Instead, speculative methodology is about challenging conventional responses and interpretations by prompting and opening up the prospective.

Providing a few examples of what speculative algorithm methods might look like in practice, the talk ultimately argues that to rethink, speculatively, our understanding of algorithms also entails seeing algorithms as speculation.

Bio: Taina Bucher is Associate Professor in Screen Cultures at the University of Oslo. From 2013 to 2019, she worked as an assistant professor and then associate professor in communication and IT at the University of Copenhagen. She has written widely on the relationships and entanglements between algorithms and social and political concerns, examining how users experience and make sense of algorithmic power and politics. Her recent book IF…THEN: Algorithmic power and politics (2018, Oxford University Press) provides an account of the algorithmic media landscape, paying close attention to the multiple realities of algorithms and how these relate and coexist with our everyday lives. She is currently writing a book on Facebook for Polity Press, and is involved in research projects on the changing role of personal information in the platform society and the digitisation of the Norwegian media and cultural sector.



Are you doing, or have you done, research on algorithms without being a programmer? What methods did you use, or are you planning to use? What problems did you run into?

We are looking for UiB researchers (including PhD and MA students) to give five-minute lightning talks about specific research methodologies relating to algorithms. Please email a short text (500-1000 words) about your research case to hallvard.moe@uib.no and jill.walker.rettberg@uib.no by 29 October. Please include questions or statements about your method to encourage discussion. We will select cases that work well together to stimulate discussion.

The workshop will be organized around the lightning talks, with input from our guests for the day, Taina Bucher and Ysabel Gerrard, and from other workshop participants.