
Information about the use of Artificial Intelligence at the Faculty of Psychology

It is important that you familiarize yourself with current expectations and rules regarding the use of artificial intelligence.


The ChatGPT chatbot was launched in November 2022 by OpenAI and is an artificial-intelligence text generator that can produce natural-sounding language. ChatGPT is an example of ‘generative artificial intelligence’ based on what is called a ‘Large Language Model’. The version launched in November 2022 was called GPT-3.5; in March 2023, an even more advanced version called GPT-4 became available as part of the paid version of ChatGPT. Other AI models that can generate text, images, audio and video are or will soon be readily available to everyone. These models utilise machine learning, have been trained on large amounts of data, and can be used for everything from answering complicated questions to writing summaries or short stories. AI models are therefore very versatile tools that can be appealing to use in many different contexts.

These generative-AI tools have evolved quickly and will affect many of the everyday ways of working we are used to, including in education. When should the use of tools such as ChatGPT be considered cheating and academic misconduct? In what contexts do they become useful tools that students and researchers want to use? When should students actually be expected to learn to use tools like ChatGPT as part of the digital skills future employers will demand? We do not yet have answers to these questions, and we are in a transitional phase where students may find that the rules and expectations regarding the use of AI tools are changing, and may vary from course to course and situation to situation. It is therefore important that you familiarize yourself with what is expected of you and what has been communicated by the course and programme coordinators at any given time.

At the University of Bergen and at the Faculty of Psychology, we expect everyone to act with academic integrity, and cheating is seen as a serious breach of trust in relation to fellow students, the university and society as a whole. It is therefore important to always be aware of what is permitted when it comes to the use of supporting materials and tools in assignments and exams, including generative-AI tools such as ChatGPT. As a general rule, students must submit their own work consisting of their own reflections and analyses in order to develop and improve their own competence.

In addition, it is important to be aware that generative-AI tools also have clear weaknesses. Here, we will only point out three of these weaknesses, but be aware that there are also others: for example, ethical questions about feeding our own work into generative-AI models; possible problems regarding data protection; and social inequalities in terms of who is able to actually access the tools.

1. ChatGPT and other text/image-generating AI models have no access to ‘reality’. They work by statistically predicting the most likely next word, based on the data sets on which they have been trained. This means that you can never simply trust the sentences the tool creates – all content must be verified.

2. One of the most obvious weaknesses of ChatGPT (and similar models) is the limitations of the data set. First, OpenAI, the company that developed ChatGPT, has given almost no information about the datasets on which the models are trained. According to OpenAI, ChatGPT has (at this time) no way of responding to current topics, since the dataset only runs up to 2021. More importantly, the datasets (e.g., parts of the internet such as Wikipedia) are human-made, and anything human-made is characterised by human bias and prejudice. In addition, a lot of what you can find on the internet is simply wrong: misunderstandings and misinterpretations may well be incorporated into the training materials. It is difficult to be critical of sources when you don’t know what the sources actually are. Beyond problems with misinformation or disinformation, everyday biases are likely to be amplified in the responses that generative-AI models return.

3. ChatGPT also has major weaknesses when it comes to source referencing and citation. The references in ChatGPT texts often turn out to be incorrect and in some cases completely fictional. This is because ChatGPT only reproduces patterns in the texts it was trained on and does not (at the moment) have any way of actively checking that the sources are correct. ChatGPT therefore cannot be trusted to provide correct source references.

With the advent of these tools, the boundaries between what is one’s own work and what is not are shifting. For example, using Google Scholar and other AI-driven search engines to find relevant references and texts is not considered problematic, and AI-driven spelling and grammar checks are built into the software we use. However, both students and researchers need to take responsibility for their own work. If you are unsure whether use of a generative-AI tool breaches academic integrity, it may help to think about how you would go about getting help or input from a fellow student or another human being: where do you draw the line? Using a fellow student as a conversation partner to talk through ideas is often fine, but submitting a piece of work that you have not written yourself is never, or almost never, acceptable. Is it okay to ask a fellow student or others to write an outline for an essay for you? It is not, because working through the writing process yourself develops the critical thinking that you are at university to acquire. Learning to be academic also includes being critical of sources: you must be able to assess a source, and often we also expect you to critically evaluate the method used to arrive at the knowledge you wish to cite. These kinds of things are worth reflecting on: don’t forget what you are here at university to do.

With regard to student assignments and exams, the course description will state whether students are permitted (or expected) to use support materials, including ChatGPT or similar generative-AI tools. If their use is permitted or expected, the submission instructions will also state how students are expected to use the new AI-based tools.

Using ChatGPT or similar models in an assessment where support materials are not permitted not only violates the University of Bergen’s expectations regarding academic integrity, but may in the worst case result in an annulled examination result and exclusion from the University of Bergen, and from all universities and university colleges in Norway, for two semesters.