One of the reasons the creators of imitative AI systems focus so much on theoretical dangers, such as a superintelligent AI deciding we should all be turned into paperclips, rather than immediate harms is that doing so lets them avoid hard questions about the actual purpose of these systems. Because it turns out that they exist largely to help executives avoid accountability for unpleasant actions and decisions.
Since these systems are merely imitative, they are not going to produce anything really new, just copies of existing forms. If imitative AI had existed in Charles Dickens’ day, all of the output would look not like modern novels but like, well, Dickens’ books. Every other character would be a street urchin with a whimsical yet meaningfully stupid name. If it had existed before impressionism, all illustrative output would look like realistic pictures of fruit that most people of the time couldn’t afford. They could not create pop art or modern novels — they can only imitate what has come before, usually quite badly. So, the point of them is not to create, not really. The point of them is to limit the amount of money paid to actual creative people in such a way as to minimize the responsibility of the decision makers for those choices.
If you “create” a first draft with an imitative AI program, then you save money by making a writer “rewrite” it and can avoid the responsibility for both the unimaginative framework of the story and the shrinking amount of paid writing work. After all, an AI created both the framework and the efficiency of the first draft — whose fault is it that it now requires fewer writers to “rewrite” the material? The point is not art — the point is using technology to make decisions seem inevitable. Two recent stories highlight this dynamic.
First, researchers have recently shown that ChatGPT and Google Bard will quite happily regurgitate misinformation. In other words, they will lie until the cows come home, and then keep lying while the cows have dinner, watch the nightly news, brush their teeth, and go to bed. ChatGPT-4 repeated 98 out of 100 common pieces of misinformation (it kept lying right until the cows got up the next morning, apparently) and Bard repeated 80 out of 100.
But that cannot be the problem of the companies that produce these systems. They tell you right on the label, in nice small print, that the output of these systems may not be accurate. And so, if they happen to produce misinformation that flatters people enough to keep them happy and coming back to said products, well, that is certainly not their fault, is it? No, of course not. The AI produced those results in mysterious ways that no mere human could possibly understand, let alone be held accountable for.
In a similar vein, a school in Iowa decided which books to ban from its libraries at the behest of ChatGPT. They didn’t want to take the time to actually have a person go through the material, so they asked ChatGPT if each book had sexual material and relied on its answer. They did this despite the fact that ChatGPT gives different answers to that question for the same book, and despite the fact that ChatGPT is well known for lying in many different contexts. The tone of the responses from the school administration makes it very clear that they are not concerned with accuracy. What mattered was pointing the finger of blame somewhere other than at them. Authoritarianism via code.
That is the benefit of imitative AI systems to those in charge. By outsourcing decisions to machines, they place those decisions at a remove from themselves. That, in turn, allows those in control to pretend that they haven’t actually made a decision and thus to avoid accountability for their actions. The machines banned the books and fired the writers, not them. It is the perfect scam if we let them get away with it.
And, to the people in power, worth every penny that will go to the companies that make these systems and hide the culpability of the people who use them.