ChatGPT Firing and Insurance Rejections Show Problem with Imitative AI Discourse - Metaphors Are Lies



Sam Altman, the weirdo who wants to scan the retinas of everyone on Earth and put them on the blockchain (I swear, none of that was satire) and who was also the CEO of OpenAI, the company behind ChatGPT, got canned last week. The reason for his canning, and the huge coverage of it, shows that the AI industry and the media covering it are fundamentally unserious. Especially in light of a story about how AI is being used to worsen people's health for profit.

Altman was fired unexpectedly last week by his board with a statement that implied he had been dishonest with them. We all settled in for a juicy, if largely inconsequential, story about financial malfeasance or inappropriate personal behavior. What we got was infinitely more stupid. Before the end of Friday, the board was trying to get Altman to come back and it turns out that he was fired because some board members thought Altman was moving too fast towards Artificial General Intelligence. The implication being that both Altman and the board thought they were close to AGI — an artificial intelligence that can think in general terms like a human does. I promise you, they are not.

OpenAI’s products are not intelligent. They are clever prediction machines — they cannot produce intelligence because all they can do is copy. They can only determine what the math says is the next best pixel or word based on past behavior. I have talked about this before, but that is not intelligence and cannot be turned into intelligence. If you fed a picture into ChatGPT, it would gamely try to figure out what word it represented, because that is all it has been trained on. It can never independently integrate new, different kinds of data like people can. Bet IBM Watson a hundred bucks that you can beat it at chess if the pawns can now move as far as they like, and no one at IBM would take up your challenge. I doubt that AGI will ever happen, but if it does, it won't — it can't — come from the simple word/pixel calculators that are the state of the art in imitative AI.

This is not to say that the work isn't impressive, merely that it is not actual intelligence and will not lead to actual intelligence. (Though it is largely about brute force. The sheer amount of training data required to get these things to even approach functionality is why OpenAI is, well, open about its desire to never pay for the use of the copyrighted material it used to train its models. My engineer's soul is always more impressed with efficiency than brute force, and these are not efficient systems in any sense of the word.) It is clever, if brute-force, math applied to a specific class of imitative problems. That Altman and the board, and the larger AI community, including the media, are so focused on fanciful pie-in-the-sky, by-and-by issues is a condemnation of the entire industry.

Because another story came out at the same time, is getting a tenth of the attention, and shows the real harm of AI, right now, today.

UnitedHealth, a US insurance company, has been caught using AI to kick people off needed services. Essentially, the claims agents are heavily incentivized to rely on the AI system to determine when to end services. The model has a 90% error rate, largely because it ignores actual medical information, such as people having setbacks or getting pneumonia in the facility, or the severity of the initial injury or condition. Even when a claim was approved on appeal, the system — and thus the claims agents — would deny it again on the next review. This is the real danger of AI — the replacement of expertise by data that companies can hide behind as objective (after all, it is AI!) when they screw people.

The focus on AGI by so much of our media and by the companies involved in these systems (and thus the people who tend to have the ears of politicians) shows just how unserious the industry as a whole is about the real danger of imitative AI and other machine learning systems. Skynet is not the problem, and the fact that OpenAI and Altman have been in a pissing match — a pissing match covered as if it were an election or a Super Bowl — over Skynet is both hilarious and depressing. Because these companies are doing real harm to people, right now, no Skynet needed. The services denied by UnitedHealth's faulty system caused real physical and financial hardship to real people, and apparently continue to do so. We don't need to worry about AGI — we need regulations to stop this kind of harm right now, not fanciful dorm-room arguments about how a word calculator is going to nuke the planet one day, maybe.

But real harm today is not, apparently, as sexy as freaking out over a fantasy. And, to indulge my cynicism, the freak-out conveniently focuses attention away from what these companies are doing today. (I am not, to be clear, suggesting that the Altman drama is faked. People can be freaked out by something stupid and use that convenient freak-out to their benefit at the same time. It is easy to convince yourself that the harm that makes you money is not as important as the harm that might happen one day, later, when you are done making money.) But the harms are real, and they should be what we are focused on, not the internal politics of a company indulging itself in story time.

Personal note: this may be the last newsletter this week. I have a family member undergoing a medical procedure. Between that and the US Thanksgiving, I may not have time for any more writing. If you are in the US, enjoy the holiday however you celebrate it. I hope you get time with your family and friends, in whatever combination of those terms makes you happiest.
