Since the release of ChatGPT in November 2022, one of the main concerns regarding this chatbot has been its impact on education. Assistant professors dr. Laura Ripoll Gonzalez and dr. Francisca Grommé are currently exploring the use of Artificial Intelligence (AI) by students in education. They consider how AI affects education, ethical issues, and the sort of guidance students need from the university, among other questions. Students, being of a generation that grew up in the digital era, adopt it into their routines quite easily, but they do face some ethical concerns.
ChatGPT is a form of generative AI (gen AI). This artificial intelligence lets users enter prompts to create original content, such as texts, images, videos or audio. Generally, students use the chatbot to revise texts, generate ideas, create texts, or retrieve information. Even though technology like ChatGPT can be enriching, it also comes with constraints and challenges.
The potential of chatbots
Almost all students use language models such as ChatGPT in their education. The most popular use of chatbots is to generate ideas. Grommé explains that students also like the possibility of using a chatbot to reflect on their work. “Usually, their idea is not to generate a final product but to involve it in many steps in writing. ChatGPT works as a mirror for their work.”
Another advantage is that AI contributes to working more efficiently. Ripoll Gonzalez mentions, "It is possible to train a chatbot on a specific data set. You could then search by keywords to quickly scan for specific information, which saves time. This can assist more technical aspects of literature reviews, for instance", although both agree it also has disadvantages. Grommé points out: "Working like this attunes to the idea that we should work faster and put less effort into our work. I doubt if we can get satisfaction from working like this."
Plagiarism is the biggest concern
Many students wonder how they can use it in an ethical and safe way. Plagiarism is one of their main concerns. “The ethical use of Generative AI is not only a concern for students, but also for academics”, Ripoll Gonzalez says. “I might use ChatGPT to help with suggestions to fine-tune my writing but not to generate ideas. Students may also do so. However, when they submit their work to us and the plagiarism software detects AI use, it is hard to discern whether it is still the students’ original idea”.
AI echo chambers
Another disadvantage, according to Ripoll Gonzalez, is that ChatGPT can act as an echo chamber. Chatbots are inclined to echo the views of their users, and so can feed our own biases back to us. People tend to ask chatbots fully formed questions, whereas on traditional search engines they are more likely to use keywords. A complete question may unintentionally contain a prejudice, and since chatbots are trained to pick up clues from questions, you can expect an answer that reflects your point of view. “In this case, language can be used against you”, Ripoll Gonzalez says.
Grommé adds that “these chatbots can also be biased because human beings have trained them. They learn from data, and if the data is biased, their outcomes will be too.” Ripoll Gonzalez elaborates a little further on this topic by pointing out that “there is the additional issue of which information is fed to the language model, and whether we are respecting data privacy and copyright, for instance, when students use ChatGPT to process data from interviews as part of their thesis projects. This is an epistemological issue, redefining our relationship with technology as both users and creators of knowledge.”
Not all knowledge is available on the internet
Another disadvantage, according to Ripoll Gonzalez, is the fallacy that all knowledge is available on the internet. “But that’s not true. Indigenous knowledge, for example, is not on the internet. Our students should not just trust that what is on the internet is enough.” Most information still comes from the Global North. “This also makes me think of the ethical issue of the paid environment of ChatGPT”, says Grommé. “Paid access gives you access to better resources than your fellow students in other countries or other parts of the world who don’t pay for it. Generally, universities in the Global North have more money. This results in disparities in knowledge between various universities and countries.” Inequalities can thus develop among students, because some can afford paid access while others cannot, and among universities, because wealthier institutions can afford licenses for tailored AI applications.
Creating guidelines
Grommé explains that students are generally aware of the ethical issues regarding AI in education. “They are aware of the possible copyright violations, the bias, that sources can be scrambled up, that private companies are trying to profit from what we are doing, etc.” She sees that students are willing to take responsibility for it if we keep having open and transparent conversations about it. “In this way, we can set up some guidelines that we can use and keep on developing together. We think having open conversations with the students about using AI in education is important.” In any case, the university has already drawn up a basic user guideline for Generative AI, which states that plagiarism occurs when a student uses AI software without the examiner's permission.
Regulating AI is difficult (Collingridge dilemma)
Every technological change brings progress as well as problems. Hybrid working arrangements and video calling, for example, changed our norms for how we collaborate and what we expect from each other. However, regulation and management still need to catch up with these changes. The same applies to almost all innovations, including AI. It is possible to steer or regulate new technologies, but it is hard to know in advance what their effects will be. Once the effects are apparent and the technology is embedded in society, it becomes harder to change. This is called the Collingridge Dilemma.
Regulating AI is very difficult, Grommé explains. “Our norms are already changing with technology before we make the rules. And when we make the rules, they don’t fit the norms anymore”, she explains. It is possible to successfully regulate a given technology when it’s still young and unpopular. “Therefore, we must act quickly and keep talking to each other”.
Having listed all these disadvantages, both researchers stress the importance of open conversations. “Together with students, but also among universities. Maybe even with chatbot developers. In this way, we can guide our students on how to use it responsibly. ChatGPT is already embedded in society. Therefore, it is essential as a university to regulate it and create clear guidelines”, Grommé says.
More information
This interview is part of Spark. With these interviews, we aim to draw attention to the positive impact of the faculty's education and research on society. The stories in Spark give an insight into what makes ESSB students, alumni, staff and researchers tick.
Contact: Britt van Sloun, redactie en communicatie ESSB, vansloun@essb.eur.nl