
Health Care AI Bias Shows AI Privileges Data over Expertise


A new study in Nature shows that LLMs (Large Language Models, what most of us think of when we think of AI; ChatGPT is the canonical example) perpetuate racist myths about patients and treatments in medical contexts. This is another example of how imitative AI is going to lessen our ability to access expertise.

In theory, imitative AI would be a good choice to help in certain doctor-patient settings. Medical practice, at least at the initial intake, is generally precise in terminology, and many diagnoses are straightforward. Many treatment suggestions are rote in nature. Imitative AI, then, has a lot of solid, static, applicable data to imitate. The problem is that a lot of that data is racist.

As the paper points out, a lot of medical material is still infused with racism. Sometimes it reflects the era in which the initial discoveries were made. Sometimes it reflects insidious beliefs, especially around pain tolerance, that persist today. Regardless, since the racist beliefs are present in the training data to be imitated, they inevitably end up being imitated in the outputs. And that is what the study saw (along with made-up equations and answers that were completely wrong for reasons other than racism): racism appeared in all the models to one degree or another.

This is the inherent problem with imitative AI. It does not actually learn anything. A human doctor has a chance to apply their expertise (racism is incorrect) to the data and extract the meaningful information, or to do deeper research to weed out where racism is infecting the data. An AI system has no such expertise to apply; it merely imitates the data it was trained on. Some companies do attempt to “wash” their models with human expertise, but that is obviously not enough given the results, and it is likely a token effort to begin with.

You might argue that AI systems are less racist than regular old doctors. That might be true (the paper did not approach that question), but I doubt it. For one, imitative AI systems aggregate all the racism in the data into their models. Instead of a doctor who has some outdated and/or subconscious biases but is aware enough to overcome some others, you get a machine that has the entire smorgasbord of racism to lay out in front of you, and sometimes it will.

Worse, the racism coming from a machine may be more self-perpetuating. Studies have shown that people trust computer results more than humans, so an AI system has a greater chance of convincing patients and doctors that its racism is true. After all, it’s a computer; it can’t lie, can it? (Reader: it most certainly can.) And AI companies are going to great lengths to argue that they cannot be held responsible for the output of their systems. Even when AI systems have made up libelous facts about people, the companies claim that they should in no way be held responsible. When a doctor is racist, the organization he or she works for has an incentive to correct that doctor, because it is liable for his or her behavior. If AI companies win these cases, they will have no such incentive. We will end up with systems that can be as racist as they want, with unclear lines of responsibility.

AI companies can only make money if they are paid to replace human labor at scale. Building these models requires paying for modelers, computing power, electricity, and so on. They are not cheap, and no AI company appears to be making a profit from them, even before we get into the question of whether they will end up having to pay people like writers and artists for their training data. Paying experts to vet a model, and then paying programmers to ensure the vetting persists within the model, adds high costs. The best way for these companies to have a shot at being profitable is to rely on data, not on expertise.

The health LLMs are just one example of this. We have seen CNET get caught using an LLM that plagiarizes. Microsoft is using an imitative AI to place “news” articles on its home page, articles that do things like call a deceased NBA player “useless” in addition to flat-out lying. Imitative AI systems used to help programmers are known to produce code with well-known security holes. All of these problems could have been avoided by the application of expertise, but expertise is expensive compared to data. And so the AI companies do not use expertise. They attempt to replace expertise with a large amount of data, but that can only work if the data reflects the expertise. And given that LLMs require so much data, it is inevitable that expertise will be swamped by non-expertise. AI systems are already proving that this is true.

AI systems can, when properly deployed and vetted, be beneficial. But the level of vetting required to make them reliably useful appears to be too expensive for AI companies to make a profit from their models, given all the other costs associated with creating imitative AI models. So, they won’t. They will continue to push out unreliable, racist models that people will trust more than humans, exacerbating our existing problems.

Unless we stop them. AI systems must be regulated so that these problems are found before the systems are released to the general public, so that AI companies are liable for the output of their systems, and so that the inclusion of expertise is a requirement. And if that means certain systems are not profitable? Well, I don’t feel compelled to allow companies to make money by poisoning society.

