A circuit court has just ruled that TikTok can be held liable for the harm caused by material its algorithm recommended. Predictably, we are hearing about the End of the Internet As We Know It. First, don’t make promises you don’t intend to keep. Second, the ruling allows companies to continue to moderate as they see fit and to remain shielded from liability for third party content that they do not push on users. And this is as it should be.
The facts in this case are pretty straightforward. TikTok recommended a dangerous “challenge” video to a child, and the child died attempting the challenge.
Some videos that may appear on users’ FYPs are known as “challenges,” which urge users to post videos of themselves replicating the conduct depicted in the videos. The “Blackout Challenge . . . encourages users to choke themselves with belts, purse strings, or anything similar until passing out.” App. 31 (Compl. ¶ 64). TikTok’s FYP algorithm recommended a Blackout Challenge video to Nylah, and after watching it, Nylah attempted to replicate what she saw and died of asphyxiation.
The question, then, becomes: who is liable? Up until this court case, the answer would have been exclusively the producer of the content. No matter the harm that internet companies may have done via their processes and algorithms, as long as there was third party content in the mix, the provider or service was held harmless.
A great deal of harm has always been done by these algorithm-driven products, and thus by the decisions these companies have made:
- You can create a product that gets people assaulted, and puts them at risk of assault, over and over again
- You can push life-threatening content to children
- You can push terrorist supporting material to your users
- You can make money off pushing radicalizing material to your users, such as election denial
And let us be clear: these algorithms are decisions that the companies are making. An algorithm, in this case, is an expression of the business’s desires. Businesses that create these kinds of algorithms are deciding what content to show to users. They are not merely making third party content available to other users; they are putting their hands on the content to privilege some over others for their own business needs. Quite logically, the court found that such activity is not protected by Section 230. If you are telling people to look at this content, then that is on you, not the creator of the content.
Moderation is still protected, and moderation decisions still do not add to a firm’s liability. Your comment section will not make you liable for anything under this ruling, and neither will, for example, a chronological feed of posts on a social media site. You can host today anything you hosted before this ruling. You merely cannot pretend that your algorithmic decisions are immune from liability.
Yes, recommendations may be trickier under this kind of ruling, but even those are not likely to be significantly affected. They are likely to be less focused on outrage and engagement, and may require more human intervention. That is a change in the business model of many internet products, but that is fine. The internet should not be a product liability free zone. Section 230 does not, in its plain language, remove product liability; it has only been an egregious, business-friendly set of judges who created the “kill your users for profit” world we now inhabit.
This is a step in the right direction. I don’t know if it will survive the inevitable appeals, but it is important to remember that holding companies to account for their products is not going to end speech on the internet. We hold other companies liable for the damage their products do. It has never been just to allow internet companies to ignore product liability and to knowingly harm people for a few dollars.