5. Kaleidoscope

ChatGPT: friend or foe?

Daniel Cassany,
Professor in the Department of Translation and Language Sciences

ChatGPT has had the most high-profile roll-out ever. No other tool has made such a splash. In just a few weeks, we have learnt about it and tested it (can it pass the university entrance exam? how about biology?), tutorials have popped up on TikTok, and everything from labour catastrophes to institutional bans has been predicted.

It is impressive how you can chat with it in colloquial Catalan or other languages and it will answer you without missing a beat, seeming all-knowing. It is like chatting with a contact on WhatsApp, with its friendly tone and prose that flows from left to right, top to bottom. 

In contrast, I am less impressed by how it defines concepts, gives examples, summarizes novels, translates and corrects texts, and elaborates on academic topics fairly well. Other tools could already do that! What is different this time around is that ChatGPT does it all, that it seems more human and that it saves you the hassle of choosing from several results. It returns only one (albeit sometimes containing multiple perspectives), which is presented as the Truth with a capital T. (And this I like even less... because it fosters blind acceptance and discourages critical thinking. I prefer for machines to let me decide for myself.)

You need to dig a bit deeper to find its limitations. Some colleagues have detected ‘hallucinations’ in the form of made-up references or false assertions (e.g. that Mexico City has a beach). From the get-go, the interface tells you that it is unfamiliar with the immediate past (because there are not yet enough data on the Internet, nor has there been time to learn them) and that it ‘occasionally’ makes mistakes. The writing style is odd, with faulty anaphora, overuse of textual markers, better-formed paragraphs than those written by humans, uneven registers, etc. ChatGPT promises to improve and aims to feature prominently in virtually everything someday. We’ll see...

For now, we need time to get to know it, to determine how far we can trust it and how to integrate it (or not) into our daily lives. We will have to learn to ask it good questions, through trial and error, and we will have to scrutinize each answer with the exacting eyes of a professional proofreader lest it fool us. We will also have to make our role clear to it in each case, to ensure that the single answer it returns is properly framed: are we asking as scientists, students, or citizens? It is as if we were starting a new professional relationship...

As teachers, we will need to continue along a path that we had already started. I have little faith in bans. On the contrary, this natural (and mischievous) interface will beguile many students. We will help them much more by incorporating the tool in class, teaching them to use it but also to mistrust it, and by emphasizing what is most valuable: personal interpretations, comparisons between concepts, reasoned arguments, critical thinking, and the ability to speak in public. We will also need to rethink our instructions: assigning a final paper on ‘disinformation’ or ‘outsourcing’ is like asking for a verbatim copy... We need personal assignments, rooted in the now, involving field or laboratory work, research questions, current data, and reasoned arguments. That is how ChatGPT will become a good friend.