Everyone seems to be freaking out about Large Language Models lately. These are trained on enormous amounts of text from the internet and hold conversations with users based on what they've absorbed. Basically, given some words, they look at what words typically come next and put together grammatically correct sentences that way. ChatGPT is the most famous of them. A lot of people are playing with it and worrying about the future of writing. The problem is that, often, people cannot tell the difference between the bot's writing and a real person's writing. This means current jobs may become unnecessary. If a computer can write news copy, why do we need reporters and editors? If students can get term papers from a computer, they get undeserved grades.
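If you're curious what "look at what words typically come next" means concretely, here's a toy sketch in Python. It builds a simple bigram model from a made-up scrap of text and generates words by sampling likely followers. Real LLMs use neural networks trained on vastly more data, and the corpus here is purely illustrative, but the core idea of predicting the next word from context is the same.

```python
import random
from collections import defaultdict, Counter

# A tiny, made-up corpus just for illustration.
corpus = (
    "the band took the stage and the crowd roared "
    "the band played and the crowd sang along"
).split()

# Count bigram frequencies: counts[w] maps each word to a
# Counter of the words that follow it in the corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate up to `length` more words by repeatedly sampling
    a likely next word, weighted by how often it followed."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break  # dead end: this word never had a follower
        options, weights = zip(*followers.items())
        words.append(rng.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-ish nonsense, which is sort of the point: fluency without anything behind it. Scale this idea up by many orders of magnitude and you get something much harder to distinguish from a person.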
I’ve played with ChatGPT a little bit. I thought it might be fun to just see what kind of answers it gives me to questions I pose. Here’s an example:
Q: What makes a live performance electric?
A: There are many factors that can contribute to the “electric” feeling of a live performance. One important element is the energy and enthusiasm of the performers. When the musicians are engaged and passionate about their music, it can be contagious and create a lively atmosphere. The audience’s energy and involvement can also contribute to the overall feeling of a live performance. When the audience is excited and responsive, it can create a sense of connection and electricity in the air.
Other factors that can contribute to the electric feeling of a live performance include the venue, the sound quality, and the stage production. A well-designed stage and lighting can enhance the overall experience, and a high-quality sound system can help to ensure that the music is crystal clear. Ultimately, the “electric” feeling of a live performance is a combination of many different elements, and it can vary from one performance to the next.
ChatGPT
That’s an acceptable answer to the question. It’s impressive in a way. But it leaves me cold. First of all, it’s too long an answer for the question. Only a very annoying person gives a two-paragraph answer to a simple question. I would have stopped reading after the first few sentences if I weren’t writing this. Second, it’s clear that ChatGPT has never experienced an electric performance. There’s something ineffable about it that isn’t even hinted at in the response. Third, related to the first, there’s just a lot of unnecessary detail. In other words, ChatGPT is banal, naive, and pedantic. (If you’re wondering, I would answer the question by saying that an electric performance is when there is a great interplay between band and audience.)
I also participated in a little test created by Eric Schwitzgebel. This time several questions were put both to Daniel Dennett, a philosopher, and to a chatbot. In the test, the user had to try to tell which answers were Dennett's and which were the computer's. I did better than random chance at picking out the real from the computer-generated, but it wasn't easy. The thing is, the machine was fed Dennett's entire corpus. It was primed specifically on Dennett. The other thing is that they gave the computer four chances to answer each question. Most of the questions had one or two obviously bad imitations, one that was OK, and one that was a good imitation. That means the machine only did a good job 25% of the time after having been set up on Dennett. It's impressive, but not that impressive. It reminds me of the million monkeys with the million typewriters, except it doesn't need a million years.
I guess what it all boils down to for me is that I really don't care what a computer has to say on any topic. It's unoriginal and thoughtless. I understand why the people who feel threatened by ChatGPT feel that way. But there are workarounds. Students can do more writing with pen and paper, in class. Maybe straight news stories will be computerized, but a bot can't replace editorial content. And it's nowhere near writing anything artistic. The fact that it can write things indistinguishable from real people just shows that most people aren't writers. We'll have to see what the future brings.
It’s true that large language models have the potential to disrupt certain industries, but it’s important to note that they are not yet able to fully replicate the creativity and nuance of human writing. While they can generate grammatically correct text, they are not able to fully understand context and the subtleties of language in the same way that humans can. It’s also worth considering that these models may create new opportunities and industries, rather than just replacing existing ones. For example, the development and maintenance of these models could create jobs in the tech industry. It’s key to approach the advancement of technology with both caution and an open mind, and to consider the potential impacts on society and the workforce.