UiB AI #9 Trustworthy AI

How do we work with trust in artificial intelligence? Meet our researchers and join our next UiB AI seminar!



Artificial intelligence today may not yet be human-centric, but it is definitely collaborative with humans. This means that people use AI to augment their abilities: the spam filter saves me time, the navigation app helps me find my way, speech transcription helps me subtitle videos, the video editor helps me convert from vertical to horizontal format without losing important objects in the frame, and so on. Moreover, AI requires constant human support: to clean, label, and generally preprocess data for learning algorithms, to identify and correct errors, and to handle the unusual, atypical tasks that an AI cannot manage. Much has been said about the trustworthiness of AI in recent years. Trust is a relational property between humans that can facilitate or hinder collaboration. Trustworthiness is a value that we want AI to live up to.

The Department of Information Science and Media Studies and the Faculty of Social Sciences are concerned with the social and collaborative aspects and properties of AI. In this edition of the UiB AI seminar series, we showcase four research examples of how we work with trust and AI. The short presentations will be followed by a panel discussion.

Program (the seminar is in English)

Trustworthy journalism through AI by Andreas Lothe Opdahl

Quality journalism has become more important than ever, given the need for trustworthy media outlets that can provide accurate information to the public and help counterbalance the wide and rapid spread of disinformation. At the same time, quality journalism is under pressure due to loss of revenue and competition from alternative information providers. This talk discusses and gives examples of how recent advances in Artificial Intelligence (AI) provide opportunities for, but also pose threats to, the production of high-quality journalism.

Trust but verify by Rustam Galimullin

The notion of an intelligent agent is central in AI, and it encompasses such entities as autonomous vehicles, healthcare robots, and us, humans. For agents and groups thereof to be effectively adopted, we cannot rely purely on unconditional trust; we should require that their behaviour is reliable and safe. This can be done by employing the mathematical techniques of formal specification and verification, which were initially developed to ensure the correct execution of computer programs. In the talk, we will argue that such formal techniques lead to better, and ultimately safer, AI agents.

Creating Embodied Artificial Trustworthiness by Ragnhild Mølster (presenter) and Jens Elmelund Kjeldsen

The creation of embodied artificial trustworthiness: Embodied AI in social, political, and journalistic communication. Since humans first began contemplating whom one could believe and trust, the most important characteristics of trustworthiness have been the true, the real, and the authentic. The untrustworthy, on the other hand, both in the past and the present, is that which is untrue, fake, and inauthentic. This is claimed both by ancient philosophers and rhetoricians and by contemporary research in social psychology, persuasion, and rhetoric. In our contemporary world, sometimes referred to as post-human, the advent and increasing prevalence of artificial intelligence turns the traditional understanding of the trustworthy on its head. Artificiality is, by definition, untrue, fake, and inauthentic, and not least without a physical body. Still, in communication and social interaction, humans increasingly rely on, believe in, and trust embodied artificial intelligence. This is a great dilemma: history, culture, and research have taught us to believe and trust the real, the actual physical people in front of us; so why do people in our time believe and trust the artificial? This raises other important questions: What is it that makes us believe and trust AI? Is it because it is perceived as real? Because the voices and “bodies” of AI appear real to the user? Or is the kind of credibility and trust we attribute to it different from the kind we have historically attributed to real living people with bodies? In brief: why and how do we trust and believe embodied artificial intelligence? This presentation addresses this question by examining selected examples of embodied artificial intelligence.

Power to the Platform? AI and the Image Economy by Richard Misek

Generative text-to-video models, such as OpenAI’s recently announced Sora, are on the verge of becoming widespread tools for video production. Once refined and made public, Sora and similar models are likely to turn the image economy on its head, creating a seismic shift in the balance of power between creators, consumers, and tech platforms. Though AI companies emphasise how their products will empower consumers, the risk is that they will disempower creators and turn AI platforms into economic ‘chokepoints’ through which Big Tech will extract high profit margins at the expense of nations’ creative economies. But there remains one significant check on the spread of text-to-video models: their hunger for high-quality training data. Most trawlable video is relatively low quality and copyrighted. Most high-quality video is owned by media owners (notably commercial archives) and held behind paywalls. Over the last year, this has produced a complex dynamic of interactions between ‘legacy’ media organisations and tech companies, encompassing both litigation and collaboration, with everyone jostling for dominance of the emergent AI image economy. In this context, it is not surprising that OpenAI refuses to say where Sora’s training data came from. But if OpenAI cannot even be trusted to identify its source media, can it be trusted to control the global generative AI economy? Rather than focusing on the trustworthiness of images and media, this talk explores how far we can trust the corporate players whose current actions will shape the future media industry, with particular attention to three key players in this field: OpenAI, NVIDIA, and Getty Images.

Summary by Isabell Stinessen Haugen

Moderator: Marija Slavkovik