I recently wrote a blog post – On the Shoulders of Giants – about the way academics, particularly in the liberal arts, are trained to base what they say on the work of previous thinkers and then conclude with some original observations of their own. This derivative and, I would say, medieval approach places the accent on paraphrasing what has come before: it therefore works against original, out-of-the-box thinking.
https://www.evidentia.net/evidentia/on-the-shoulders-of-giants/
I find that ChatGPT, a recent development in artificial intelligence, works along much the same lines – with the difference that it is ultramodern rather than medieval. The application, developed by OpenAI in San Francisco, reproduces the function of writers by scanning millions of pages of web content within a few seconds, then generating grammatically correct and apparently coherent text that cannot be traced back to its original sources.
AI emulates human intellectual processes, taking them apart and putting them together again like a vast jigsaw puzzle. No doubt a lot of human ingenuity goes into that emulation. Microsoft recently announced it is investing US$10 billion in OpenAI, the developer of ChatGPT, which means the application is here to stay.
But ChatGPT works against original thinking, because it is always based on what has been written before. It merges the good, the bad and the ugly. It seamlessly combines the beautiful language of great authors with conspiracy theories developed by malevolent moral agents, yet it lacks an authentic human voice. In collecting millions of samples and piecing them together in plausible new combinations, it is not able to exercise judgment the way a human being would.
This application produces virtually untraceable outputs derived from existing works, and therefore raises the serious issue of how to protect the copyright of the real authors. It gives wannabe authors and students without talent or patience the wherewithal to produce simulacra of literary works, journalism, school assignments and even exam responses. It enables them to fake their way to producing ostensibly original works and eventually to obtaining a good grade and a diploma without the least effort.
You may ask: why should we complain about AI-driven text generation, if it means we instantly and effortlessly get what we are looking for, whether we want to write or read computer-generated text?
This question fudges the role of authorship. We have long relied on authors to provide us with meaning in life. Authors devote years of their lives to the craft of writing: they question and observe people and situations on our behalf; they develop critical thinking; they research themes that have never been written about before; they overcome the poverty of language – of words – in conveying the rich texture and density of experience, pushing back the barriers to understanding, and occasionally risking their lives in the process.
Moreover, some authors struggle through significant personal hardship to create their works, and the same is true of musicians, sculptors, painters and other artists. Think of Beethoven’s triumph over deafness in the musical realm. Other creators have struggled with slavery, illness, grief and persecution. Their life struggles give added meaning to their work.
But this is not the way the AI community sees it. I remember interviewing AI visionary Raymond Kurzweil for a documentary series a few years back, and I was struck by one of his comments in particular. He said he wanted his keyboards, synthesizers and AI software to enable users to compose and perform music like Beethoven in the comfort of their living room. Kurzweil presented his technology to me as a force for democracy. He certainly helped Stevie Wonder and other people with disabilities. But Kurzweil also knew that AI music systems are a gold mine.
While doing the interview in Kurzweil’s office, I wondered how anyone could convince himself he would become a musical genius simply by spending a few thousand dollars on hardware and software and reading an operations manual. I would say that sampling Beethoven millions of times with a computer, then piecing the samples together in endlessly new musical patterns, is not at all the same as musical creation.
Then there is the question of rarity. Authors produce a limited number of works, and works are more valuable when they are rare. ChatGPT and applications like it risk trivializing culture by drowning us in an inexhaustible ocean of works, all cloned yet subtly different, all based on the same scooping-up and reassembling technology, all lacking an original human voice.
By cloning culture, AI risks turning authorship into something meaningless.
Here are two articles I wrote recently on chatbots and authorship, one for Le Devoir in Montreal, the other for The Toronto Star:
https://www.ledevoir.com/opinion/idees/774527/idees-danser-avec-un-fantasme-sur-les-ailes-de-chatgpt