Camilla will explore the use of AI models in health care

Master's student Camilla Anna Simonelli from the University of Pavia will spend the next six months at the Pandemic Centre. There she will finish her thesis on the use of AI models trained on pandemic data in health care.

INTERDISCIPLINARY: Camilla Anna Simonelli will take an interdisciplinary approach when exploring the potential use of AI models in health care.
Photo: Paul André Sommerfeldt

The 25-year-old is a biomedical engineering student at the University of Pavia (Italy). She recently arrived in Bergen to collaborate with the Pandemic Centre while writing her final thesis.

– My field is artificial intelligence and health care. In my thesis I will evaluate the impact of explainable AI on decision making in health care.

– What is explainable AI?

– Explainable AI is a field born from the need to make complex AI models more transparent and understandable for end users and developers. AI models are often referred to as “black-box models”, meaning that you cannot see why they return a specific output. This is critical in high-stakes decision making, such as in health care.

– Can you give an example?

– For instance, when clinicians use AI models for specific patients, they must be able to understand why the model suggests a particular diagnosis or treatment in order to evaluate it. AI models are not always correct, so you need to know what they are doing wrong, Simonelli explains.
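To illustrate the contrast Simonelli describes, the minimal sketch below trains a simple, transparent model whose individual predictions can be broken down feature by feature, which is exactly what a black box does not offer. Everything in it (the feature names, the data, and the model itself) is invented for illustration and is not taken from her thesis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features; not the variables used in the thesis.
feature_names = ["age", "oxygen_saturation", "temperature"]

# Synthetic "patients": the made-up label 1 means hospitalize.
X = rng.normal(size=(200, 3))
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to a single
# prediction is just coefficient * feature value, so a clinician
# can see why the model leans one way for this patient.
patient = X[0]
for name, c in zip(feature_names, model.coef_[0] * patient):
    print(f"{name}: {c:+.2f}")
print("P(hospitalize):", model.predict_proba(patient.reshape(1, -1))[0, 1])
```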

Help in decision making

The AI models she will be exploring are not currently in use in health care, but are likely to be implemented at some point in the future.

As part of her thesis, Simonelli will survey 161 residents at the hospital in Pavia. The key question is how AI models can support the decision on whether patients presenting to the emergency room should be hospitalized or sent home.

– The model was trained on data from COVID patients during the first wave of the pandemic. The data consist of X-ray images and clinical examinations.
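For a model like the one described, which is too complex to read off directly, post-hoc explanation techniques can probe what it relies on. One standard technique is permutation importance, sketched below on synthetic data; the feature names and classifier here are hypothetical stand-ins, not the actual thesis model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Hypothetical exam-derived features; stand-ins, not thesis data.
feature_names = ["xray_opacity_score", "crp_level", "age"]

X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# A "black-box" classifier for hospitalize vs. send home.
forest = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and
# measure how much the score drops. Large drops mark features
# the black box actually relies on.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```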

The clinical assessment of which patients needed hospital treatment was also highly relevant during the pandemic. With limited resources, it was crucial to know which patients could safely be sent home and which could not.

Not a replacement

AI models could become an important tool in this kind of decision making in the future, but not as a replacement for human expertise.

– This is why explainable AI, and building AI systems that are user-friendly and transparent, is so important. You don’t want to replace clinicians, but to augment their abilities by helping them understand what the model thinks or says, Simonelli points out.

At the Pandemic Centre, she will be supervised by Esperanza Diaz, Guttorm Alendal, Anna Oleynik and Prabhjot Kour.

– I will try to take a multidisciplinary approach in my work by interviewing stakeholders in the process, such as legal experts. In this way, I can investigate the local consequences of introducing a decision support system like this in a real hospital, Simonelli concludes.