A Black woman was arrested in the Detroit area for a carjacking that she could not have committed. What makes this story more interesting than the usual tales of police racism and incompetence is that the arrest was driven by facial recognition. It is a good example of how artificial intelligence and other algorithms can make humans functionally stupid.
The woman in question, Porcha Woodruff, was eight months pregnant when she was arrested for a carjacking that took place two weeks before her arrest. The police ran surveillance footage through a proprietary facial recognition system, and Ms. Woodruff came back as a match. (Her picture was in the database because she had been arrested for driving with an expired license.) The article does not say what percentage of a match she was listed at, but the match was enough for the detective on the case to put her headshot in a photo lineup. The victim identified her, and she was arrested — despite the victim never describing the perpetrator as pregnant.
This story highlights many potential problems in our criminal justice system. The prosecutor stated that the arrest was “appropriate based upon the facts” despite the obvious flaws in the arrest. According to experts in the article, photo lineups encourage false identifications because people assume that the bad guy must be present — why else would the police be showing them the lineup? Racism is ever present, of course — a Black pregnant woman was arrested on flimsy evidence and treated so poorly in custody that she had to go to the hospital for treatment immediately upon being bailed out. But what really stuck out to me was how functionally stupid relying on facial recognition made the police.
The woman the police arrested would have been eight months pregnant at the time of the crime. At no point did the witness describe the person who carjacked them as eight months pregnant. Any halfway intelligent person would have noticed the giant protruding belly on their eight-months-pregnant suspect and, at a minimum, asked something like “Hey, was the person who robbed you extremely pregnant?” before showing the witness any lineup card. But not our intrepid officer. The machine picked her out, and so she must be the one, physics be damned!
But even if you put aside the obvious pregnancy issues, facial identification is not perfect, especially when it comes to non-white, non-male faces. The idea that you can take the “match” (and, again, nowhere does the article tell us what a match consists of or how certain the algorithm was that the match was correct, likely because the private company does not share that information publicly) and go right into a lineup with no intervening investigation is insane. And yet that is apparently what the officer did. There is no evidence that they made any attempt to check alibis, trace her movements, or do any other basic investigation. They went from one poor means of finding a suspect right into another, arguably worse one, and then arrested a woman without basic, commonsense fact-checking.
Relying on the machine turned them, functionally, into an idiot.
This is one of the deep dangers of relying on so-called artificial intelligence. It is most dangerous in circumstances like this, where the systemic incentives encourage substituting machine decisions for human intelligence (remember that the prosecutor defended the arrest, meaning that the next time a similar situation arises, the police will have no incentive to change their behavior), but it can be present in almost every arena where artificial intelligence is used.
Take, for example, customer service. Already, much customer service is handled by chat bots or IVR (those irritating voice menu systems you encounter on the rare occasions you can find a customer service number). The machine tries to determine what the problem is and solve it as simply as possible before handing the issue off to a human being. But that route has several poor effects. The full context of the problem is often lost or becomes much harder to determine. Since the customer has been dealing with an impersonal, often useless algorithm, no empathy or rapport has been built up, and building it becomes much more difficult once the customer is irritated by having had their time wasted by the machine. The rep is then at least somewhat constrained by the expectation, conscious or not, that they will not go over the same problems the system is supposed to handle, potentially missing subtle issues and wasting even more time. Interjecting an algorithm into what is at its core a human process makes the humans ultimately responsible for the end result less useful, less effective, and functionally less intelligent than they otherwise would be.
Machine learning algorithms can be used to aid human intelligence. A system that helps a customer service agent quickly find information about a customer’s orders, for example, and correlates them with known shipping issues would be a boon. A system that streamlines paperwork for police officers and gets them valid warrants or automates the collection of data for FOIA requests would help both investigations and police accountability. But the people in charge have chosen a path that overly focuses on imagined efficiencies. As a result, they have created systems that lower the effectiveness and intelligence of the people who are forced to work with them.
And we all suffer as a result.