
Artificial General Intelligence and the Disconnect from Reality


For those of you who are not caught up in the minutiae of tech company internal dramas (and what lucky people you are), OpenAI, the company behind ChatGPT among other programs, had quite the kerfuffle recently. The board fired its CEO, Sam Altman, with an ominously worded letter stating that he had not been honest with them. Within a couple of days, with employees and investors in revolt, Altman was back in charge with a board willing to rubber-stamp whatever he was doing. What is interesting about all of this, to the extent that it is interesting, is how it shows how deep into nonsense the AI companies have fallen.

The “lie” that Altman told was apparently centered on the concept of artificial general intelligence, or AGI. Some of the board members felt that Altman wasn’t taking the threat of AGI seriously enough. Why? Because the company has created an imitative AI system that can almost do basic math correctly.

No, seriously:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

I almost don’t know what to do with that. The idea that a system that requires enormous amounts of computing power to sometimes get basic math right is an imminent danger is just nuts. It is not a first step on the path to AGI because it cannot be. And it is disturbing how many people think otherwise.

No, I am not an AI researcher. But I do know how these systems work, which is why I insist on calling them imitative and not generative. These systems imitate what came before, because that is all they are capable of doing. They cannot be a general intelligence because they can only work with the information they have been provided with and remix it. And that is not general intelligence, nor is it a step towards general intelligence.

These tools, unlike human beings, cannot come up with new ideas. They can only remix what has come before, and that is not the same thing. Not to pick on any one person, but the economist and pundit Noah Smith recently expressed surprise that GPT-4 could not come up with new ideas in drug repurposing; it could only repeat old ideas. Well, yeah. It is not an intelligence. It cannot figure out anything new; it can only guess what should come next based on its training data. It imitates, it does not generate. That he is surprised by this is a sad if stunning commentary on just how deeply the AI propaganda has been driven into our collective minds.
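To make the "guess what comes next" point concrete, here is a deliberately tiny sketch. This is not OpenAI's architecture or anything close to it; it is a toy bigram model, invented purely for illustration, that picks each next word based on the frequencies in its training text. It can recombine what it has seen, and nothing else.

```python
# A toy next-word imitator: it "generates" text only by replaying
# word-to-word transitions observed in its training data. It can never
# produce a word that did not appear in that data.

import random
from collections import defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Record which words followed which in the training text.
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def imitate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly guessing a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:  # nothing ever followed this word in training
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(imitate("the"))  # e.g. "the cat sat on the rug": only remixed fragments
```

Run it and you get plausible-looking remixes of the training sentence, and never a word the model was not shown. Real systems are vastly larger and subtler, but the structural point is the same: prediction from prior data, not original thought.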

None of this is to say that these tools are useless or do no real harm. They can hurt, and they are right now hurting, real human beings. I used to think that the AI doomer position was merely cynical: an attempt to forestall or guide regulations in a manner that helps these companies avoid paying the price for the real harms that they do. But after reading reporting about what people at OpenAI apparently believed, it seems that some of them take this nonsense seriously.

Fortunately, we don’t have to. We do not need to pretend that the danger is some Skynet or Skynets that are going to break down human beings to make paperclips. We can leave those delusions to the side and focus on regulations that limit the harm imitative and other AI/machine learning/algorithmic systems do today, in the here and now. What is important is not some AGI that will almost certainly never happen but the fact that these systems steal from artists, perpetuate discrimination in health care and welfare systems, misidentify the faces of alleged criminals, and generally provide an excuse for companies to hurt people in the name of “the AI must be right!”

The Sam Altman drama is a reminder that these companies need to be tightly controlled, because at their hearts, in their leadership, they seem to neither care about nor understand the true harms these systems can do and are doing. We don’t need to panic that a crappy calculator is a sign of the end of the world. We need to focus on the fact that these companies are already aiding discrimination and misinformation and denying health care to people today, right now. We need to focus on the harms of today rather than the ludicrous fantasies of danger tomorrow, so that by tomorrow these companies and systems have been made to do good for society.
