Benjamin Riley of Cognitive Dissonance has found a guide for using imitative AI in education. It is, unsurprisingly, horrible. Mr. Riley (for now, I have decided that this newsletter will be the New York Times, minus the neo-baby ownership, both-siderism, actual reporting, and word games. But you will get Mr. and Ms. before each last name!) does a great job highlighting the specific problems with the guide, but I also think its existence highlights a more general problem with imitative AI.
First, go read the newsletter; I can wait. The cats are demanding I play with them anyway, and they are scary when they are angry.
Back? Good — the cats are getting vicious.
There are, as you saw, a lot of misguided ideas in that document. The notion that kids learn better from bespoke examples than in a group of their peers is false. The idea of having AI simulate experiments rather than the kids doing the experiments themselves is almost Laurel and Hardy levels of humor. But even where you can argue that the guide is not trampling on some pedagogical imperative, the fact remains that nothing it suggests really benefits from imitative AI.
Take this as an example: in the middle school social studies section, it is suggested that students use imitative AI to create pictures relevant to their own cultural heritage and then present them to the class in multimedia presentations. Okay, art can be a good learning tool, and it’s always good to encourage a sense of accomplishment in kids; doing any kind of art (drawing, collage, poetry, etc.) is one way to do that. But you don’t need imitative AI to create art. In fact, relying on something else to create the art for you is unlikely to engender feelings of accomplishment.
Or take this idea from the grade school social studies section: students roleplay as civic leaders and participate in teacher-led discussions where the imitative chatbot asks the students questions about how to set up the ideal society. Why in the name of Turing would you need an AI to ask students thought-provoking, if age-appropriate, questions? You have a teacher right there, you dipshits. But that is probably the point: the people who created this document don’t seem to think highly of teachers. Which fits. The people who make these systems, or at least the people who run the companies that make them, don’t seem to like creative, thinking people very much. Why would professional teaching, a job that requires creativity and empathy, be safe from their scorn?
That is a moral failing, but as I think you can see, this document also highlights a business failing. Imitative AI models are not cheap to run. The tasks this document reserves for imitative AI are either counterproductive, meaning eventually people won’t pay for them, or they still require a teacher to be fully present. There is no real savings or productivity advantage to using imitative AI, as made clear by the imitative AI firm that wrote this document. Even if you think personalized education is more valuable than group education (again, in almost all situations it is not, especially for children), having an imitative AI create individual plans still requires a knowledgeable person to screen them for errors and inappropriateness and to tailor them to each student. Even writing the correct prompts for each kid would require some knowledge of the individual children. A teacher could use a learn-at-your-own-pace system to tailor instruction to each child instead. There just isn’t a lot of there there in these AI tools.
It is not just an education problem, but this document clearly highlights the overall problem: there isn’t enough money in imitative AI to justify the huge sums companies are shoveling into it. I suppose this document could have just been a quick cash grab, with little thought put into its writing. I doubt that; every large organization is pathological about weighing alternatives when making big decisions. I suspect this document is about par for the course in the imitative AI education field. Given that major financial firms are starting to publicly say that imitative AI is a bubble, I think we can be relatively safe in stating that this document shows real problems with imitative AI as a whole.
I am not surprised the Chicago Public School system asked for an AI guide. The pressure on public officials from deep-pocketed tech firms to throw taxpayer money at their cash flow problem is intense. But these things aren’t worth the money being spent on them, as this document makes pretty clear. We don’t need to pretend that “be a teacher under a teacher’s control” is an invention on the level of the transistor or sliced bread. It’s just not. And that becomes clearer and clearer every time imitative AI shows itself in public.