
A Hidden Danger in AI Failures


Ars Technica has the latest ChatGPT failure. Not only is it very bad at diagnosing people, but it is also especially terrible when those people are children. When trying to figure out pediatric problems, the system has only a 17% chance of success. And yes, anticipating the inevitable “how do pediatricians do?” avalanche — they do a lot better.

There are two things of interest to me in the article. First, it highlights why imitative AI is not good at these kinds of diagnosis tasks. One would think that with enough data, everything would be perfect. But the real issue is not data; it is that these are imitative systems: they don’t actually know anything, and so they cannot make connections unless there is enough data to average out to the correct answer (the article uses the fact that autistic kids can often have vitamin deficiencies as an example of something the system could not reason its way through). But since they are imitating doctors, they come up with results anyway, filling their responses with their best calculation of what should come next.

The research paper suggests that better data would help, and here is where the failure-meets-hype cycle can be damaging. The paper assumes that if a system were trained on better data, then it would produce better results. Maybe. First, I think people underestimate the amount of data it takes to make these systems work — there is a reason OpenAI and similar companies swallowed copyrighted material whole wherever they found it — imitative AI systems require immense amounts of data to work. Second, there’s not a lot of evidence that these systems can get significantly better than doctors, even with better training data. Remember, these aren’t thinking machines; they are correlation calculators. Better training data would likely just produce something closer to the average, because the average is what these systems create.

And yet it appears we must hand over our medical records to allow these companies to try. That is explicitly what the researchers suggest:

Though the chatbot struggled in this test, the researchers suggest it could improve by being specifically and selectively trained on accurate and trustworthy medical literature—not stuff on the Internet, which can include inaccurate information and misinformation. They also suggest chatbots could improve with more real-time access to medical data, allowing the models to refine their accuracy, described as “tuning.”

Emphasis mine.

I do not want this. I do not want my medical records handed over to a company so that it can train a replacement for doctors that is likely not as good as the doctors themselves. I certainly don’t want hospitals feeding these systems my real-time data and hoping the result is tuned correctly. The privacy and security implications are horrendous — we know that these companies regurgitate copyrighted material, for example, and researchers find flaws that allow these systems to expose private data all the time.

There are other, better ways they can test these systems. They could use veterinary data. Yes, people aren’t animals, but the process is the same: calculate results based on past performance. If it is quantitatively superior to human vets, then maybe move on to human trials. You know, the way all medical research works.

But because the AI hype tells us that these things are the best thing since sliced bread, commonsense steps like, I don’t know, seeing if the things could possibly work before throwing away people’s medical privacy aren’t even considered. (I am sure people more clever than I could come up with a protocol better than the vet idea, yes.) We must rush forward, handing our data and privacy over to dubious companies in the name of advancement that may not be likely, possible, or sufficient to benefit anyone but the hospital admins who fire real doctors and leave us at the mercy of imitative AI systems that are “good enough”.

The hype around AI is dangerous to society because it creates an environment where consideration of how the technology is created, tested, and deployed is considered illegitimate. AI is coming, they say, whether we want it to or not, so we must rush, rush, rush to make sure we get all the not yet proven benefits of AI or we will surely be left behind.

No, we mustn’t and no we won’t.

AI is just another automation technology. Useful in some cases, not useful in others. We aren’t required to do anything other than what we do with all other new technologies — judiciously determine how to use it best for everyone, not just OpenAI’s executives.
