Ted Chiang, because he is the best writer of short works in the English language today, has a fantastic essay in the New Yorker about why imitative AI cannot create art. It is a similar argument to one that I have made previously: writing is thinking. Mr. Chiang argues that art is about making choices, and imitative AI is not, cannot be, making choices. But I’d also like to ruminate on an analogy in the piece and what it says about the intelligence in artificial intelligence.
Mr. Chiang, when talking about the value of originality, says this:
When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.
All of that is true and I think it highlights how far from actual intelligence imitative artificial intelligence actually is.
The words “I’m sorry” can have many different meanings. They can be a sincere expression of remorse, regret, and a desire to make things right. They can be perfunctory, a sign of disinterest in the person the apology is addressed to or in the seriousness of the event that requires the apology. They can be a deliberate lie, a means of forestalling accountability or a way to con a victim or observer. What makes them one or the other is the intention of the person saying them and the context of the conversation. Imitative AI can have no intention.
No matter how well written an imitative AI apology may be (and it is not likely to be well written), the machine that generated it cannot intend for the words to carry a particular meaning. The machine has no model of the world, remember. It is merely determining which word should come next based on its training data. It has no real context for the apology, no relationship with the apology’s recipient. It is merely a word dispenser, lacking any of the intentionality or world awareness necessary for intelligence.
Intelligence, broadly speaking, is probably best described as the ability to quickly adapt to new situations. These word and pixel calculators are incapable of adapting to material outside of their training data. Even within their training data they inevitably produce hallucinations, i.e. bullshit. The idea that these contain any intelligence, or that they can be leveraged into anything resembling intelligence, is just not realistic. If they cannot understand the context of the world, if they are not making deliberate choices based on that context, if they are unable to learn about and properly react to a novel situation outside of their training set, then they cannot be or ever hope to be anything even resembling intelligent.
Some people will say that the intentionality and context are provided by the prompt. This is not true, because the words are not chosen deliberately and with intention by the entity producing them. While you may intend to use your apology to mollify your boss (even if you aren’t sorry for calling Bob from accounting a walking condom ad. Bob knows what he did to your expense reports.), the machine itself is merely regurgitating words that are statistically likely to appear in an apology based on its training data. There is still no intentionality in the process, nothing that indicates an intelligence produced it.
SkyNet could say “I’m sorry” to mock the humans it conquered. ChatGPT cannot. And that is the difference between intelligence and Fancy Clippy.