In a faculty workshop on ChatGPT, we collected a list of ethical aspects of the use of ChatGPT in teaching (i.e. asking students to use ChatGPT) and in academic practice (programming, writing, etc.). This is the first version of the list we came up with, augmented with references by me. I would kindly ask you to use the comments (or @peterpur@hci.social) to annotate it and/or suggest additional items.

Privacy: OpenAI is not an open-source initiative but a commercial entity, kickstarted by investment money that is expected to generate revenue. Among its founders are some of the most notorious figures when it comes to privacy (Peter Thiel, also founder of Palantir; Reid »Privacy Is for Old People« Hoffman; Elon Musk), and ChatGPT has been called »a privacy disaster waiting to happen« [1].

This of course also recalls one of the earliest critical perspectives on information technology: Joseph Weizenbaum's shock when he witnessed how people interacted with ELIZA [2].

Sustainability/Footprint: The cost of running systems like ChatGPT is quite substantial. It has been estimated that running ChatGPT costs around $100,000 per day [3], most of which of course goes into energy use [4]. This is quite a footprint for ChatGPT to leave behind.

Cost: While there is, for now, still a free version of ChatGPT, there is already a paid tier with a higher-quality language model. This can cause social problems when some students can afford to pay for better versions while others cannot. Additionally, the high cost of running ChatGPT increases the pressure to monetize the data OpenAI collects about its users and their use of ChatGPT.

Irony of automation is a concept first described by Lisanne Bainbridge in 1983 [5]. It describes the problems created by the creeping loss of competence of human operators in automated systems: the more automated a system is, the more the competencies of its human operators erode, because they no longer hone them in everyday operation. This effect has been observed in, and held responsible for, a number of disasters in the history of industrial production [6].

In our context, using ChatGPT or similar systems to automate certain aspects of programming, writing, or problem solving leads us down a road where the irony of automation will hit us hard: learning to program is mostly done through programming; developing a writing style is learned through writing; critical thinking skills come from applying critical thinking; creativity can be trained like a muscle [7], and like a muscle it can wither. By using ChatGPT to program, write, analyse, and be creative, we damage, in the long term, our ability to do these things ourselves.

The attribution of authorship for artefacts created with generative ML systems is controversial. As an immediate consequence, many journals now ban ChatGPT from being listed as a co-author [8]. Noam Chomsky recently even called it »high-tech plagiarism« [9]. As the text ChatGPT generates is a statistical mash-up of a large number of similar texts, this position has some merit.

Hallucinated facts: Using a large language model (LLM) to create content has the problem that whatever comes out has no factual basis per se, but rests only on statistical inference. This leads to a known problem where the model states very wrong things with great authority; this has been referred to as hallucination [10], and it is a known problem in generative ML systems. The question of whether LLMs can be too big, and become dangerous because of it, was posed even before ChatGPT became public [11].

A way to avoid such problems might be to use LLMs to rewrite text, thus benefitting from the high language »skills« of such systems. This looks like a good way to avoid the generation of false »facts«. Also, it can be argued that the authorship of such a text lies firmly with the author of the original text.

It has been proposed that ChatGPT can be used as a tool to »speed up« writing, especially for its most tedious parts, like the very start (writer's block), the »related research« section (where originality is rather undesirable), or the generation of ideas. In such cases, the line where the author crosses over from being the actual author to becoming the user of a system that basically generates plagiarised text is hard to place. More discussion is needed here.

Quality: It was suggested that ChatGPT puts us on a road to long-term mediocrity: as more and more text on the internet is generated using ChatGPT, the share of generated text used to train ChatGPT will inevitably rise. Since ChatGPT, following its technological nature, mostly writes »average« text, its output is bound to trend more and more towards the average. It has also been suggested that this »heralds a new era of mediocrity and stagnation« [12].

References

(Note: all links last accessed on 20.2.2023.)

1 Glyn Moody: ChatGPT Is a Privacy Disaster Waiting To Happen. https://www.privateinternetaccess.com/blog/chatgpt-privacy/

2 Jeremy M. Norman: Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in Artificial Intelligence Programming. https://www.historyofinformation.com/detail.php?id=4137

3 CIOCoverage: OpenAI’s ChatGPT Reportedly Costs $100,000 a Day to Run. https://www.ciocoverage.com/openais-chatgpt-reportedly-costs-100000-a-day-to-run/

4 Florian Bock: Die dunkle Seite der KI. https://orf.at/stories/3303661/

5 Lisanne Bainbridge: Ironies of Automation. In: Automatica Vol. 19, No. 6, pp. 775-779, 1983. https://ckrybus.com/static/papers/Bainbridge_1983_Automatica.pdf

6 Tom Connor: The Irony of Automation. https://medium.com/10x-curiosity/the-irony-of-automation-e5f819d9aaba

7 Zorana Ivcevic Pringle: Build Your Creative Muscle. https://www.psychologytoday.com/us/blog/creativity-the-art-and-science/202002/build-your-creative-muscle

8 Ian Sample: Science journals ban listing of ChatGPT as co-author on paper. https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers

9 Open Culture: Noam Chomsky on ChatGPT: It’s »Basically High-Tech Plagiarism« and »a Way of Avoiding Learning«. https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html

10 Vilius Petkauskas: ChatGPT’s answers could be nothing but a hallucination. https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/

11 Bender et al.: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In: ACM FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada March 3 - 10, 2021. https://dl.acm.org/doi/proceedings/10.1145/3442188

12 Jack Watts: ChatGPT: fostering mediocrity since 2022. https://medium.com/@jack.a.watts/chatgpt-fostering-mediocrity-since-2022-b55033a860a4

Image: Using an artificial intelligence to support scientific writing (Midjourney)
