
Imitative AI and the Dangerously Mediocre Future


I just read an excellent, if older, n+1 article by Laura Preston. I am mostly writing this newsletter so that you go read the article. It details her time at a conference dedicated to voice AI — basically chatbots of one flavor or another. It is an excellent piece of writing, and it reinforces the real damage that imitative AI will bring. A future of mediocrity where people are trapped behind chatbots and surveillance, forever cut off from connection and real help.

Preston found, I think, two basic categories of companies. The first are those dedicated to keeping people as far away from human connection as possible. They would not describe themselves like that, but their actual actions lead to that result. Chatbots that get between the user and a human who could help, that take the place of therapists, or customer service agents, or nurses’ aides, are almost inevitably going to be used to minimize the ability of complaints to reach people who could solve those problems. A chatbot used for corporate training means no teacher for those confused by the generic lessons it will inevitably produce. A chatbot meant to answer leasing questions will inevitably keep the renter from ever being heard by the property manager. We know this because Preston worked as a human overseer of just such a chatbot and saw exactly that behavior.

Believers in the free market will claim that companies that treat their customers like that will go out of business. Those people have never tried to find Amazon’s customer service number, it seems. Many businesses are effective monopolies or take steps to make moving to a competitor extremely difficult. If it is cheaper to not solve the problem, to route it through chatbots, then the market will reward those companies that do just that.

The second type of company is, effectively, a surveillance company. One startup claimed that thirty seconds of conversation with their bot would allow them to detect depression, anxiety, and cognitive decline. When pressed on where the baseline used to determine these answers came from, the co-founder said that the markers were general to all populations. Nonsense, of course, but a convenient way to avoid hard conversations about bias while replacing the client-specific with the population-generic. And, of course, they had an agreement to integrate with Microsoft Teams and, apparently, monitor the mood of the people who use the tool. It’s not just a staff meeting — it’s a mini-therapy session.

The people creating these tools can be downright evil, like the facial recognition company that worked with the Chinese government to monitor Uyghurs, or the firm that is working with Saudi Arabia on tools for use in its new city — the one that is displacing tribes and executing those tribe members who agitate against the city. Come see the future — an AI stomping on a human face, forever. But I do not think the obvious villains in these spaces are the most dangerous villains. I think, rather, that the companies that think they are helping are going to do the most damage.

There doesn’t seem to be any evidence that these people are thinking past their next round of funding. Preston reported that there was a lot of discussion about ethics, but only at the most superficial level. And none of her interactions indicated that anyone was really taking steps to prevent the worst-case scenarios, even when the companies were aware of the potential harm. These people are pushing us into a future where we are more isolated, less likely to have our issues dealt with properly, and more likely to be surveilled for not just our physical actions but our alleged mental states. It is a weird mix of danger and mediocrity.

I do not think imitative AI is going to lead to Skynet. I don’t think it is even going to lead to massive disruptions. But I am becoming more and more convinced that it will remove the human touch from many interactions, leading to a much less useful and more frustrating future for most people. Add to that the creeping and creepy surveillance, especially around mental states, and the future AI companies seem to want to build is not one anyone would choose to live in. A future where you are docked for getting angry at the chatbot that won’t let you talk to your property manager about the raccoon infestation in your attic. But unless we rein these companies in, it seems almost inevitable that they will build such a future — mediocre, isolated, unhelpful, and punishing.
