What Section 230 means to your favorite tech platforms and how products liability can clear things up once and for all
aka the one time that Clarence Thomas and I agree on anything
Well, readers, I may get my wish. Social media platforms may change so dramatically that we will need to find something new to capture our attention. Another wish was granted as well: this topic marks the first time that the syllabus from my business law class, your survey feedback asking for more tech and more current controversies, and a specific reader suggestion have all aligned. I guess it is time for me to buy a lottery ticket.
Before continuing to the topic at hand, please take a moment to reflect on the awesomeness that is George Peppard - from Breakfast at Tiffany’s to The A-Team. He is proof that no one’s career should be a straight line.
Some history that hatched Section 230
The Supreme Court is poised to decide the future of Section 230, one of the few surviving portions of the Communications Decency Act of 1996. First, some history. Remember 1996? With its “brick” phones and an internet that had something like 12 pictures of cats on it? When we were all more focused on new TV channels like Fox News and the Sundance Channel1 or Dolly the sheep getting cloned?
What is Section 230 anyway?
At that time, the government and private industry were aligned in their efforts to protect this little seedling called the internet (invented by Al Gore?) and ensure it had a chance to sprout into a tiny tree.
Section 230 was born in the spirit of giving little “start up” companies that provide information to users (companies that would grow into the likes of Twitter, Google/Alphabet, Meta, etc.) some protection from liability - both for the information “served” to users and for the practice of what we now know as “moderating” content. At the time, we thought our biggest issue was Free Speech and making sure that the providers of internet tools were not in any way infringing on Free Speech rights.
Fast forward 30 years
It’s almost 30 years later, and we have a whole different set of problems. The real question now is: how much responsibility do search engines and social media platforms have for the downstream effects of the information they provide? And, as algorithms continue to dominate the promotion of information - including through biased methods and known radicalization by platforms - these “recommendation[s] within a closed and controlled platform move today’s online activities away from the metaphorical open public square” for which much of the regulation was created.
The Supreme Court cases at hand
The Supreme Court is currently hearing a few cases about how much protection Google, YouTube, and Twitter, among others, should be afforded for what they “recommend” and publish.
The decision will be impactful. If the liability protections continue in their current form, technology platform recommendations can continue to serve up content that “promote[s] extremism, advocate[s] violence, harm[s] reputations and cause[s] emotional distress.” Conversely, reducing the liability protections could make providers responsible for the content that they recommend and display, in a way that might severely limit their ability to operate and moderate their sites.
The facts of the first Supreme Court case at hand are very sad. Reynaldo Gonzalez’s daughter was killed during an ISIS attack on a Paris bistro. Mr. Gonzalez sued Google under the Anti-Terrorism Act for content provided by Google-owned YouTube that was used to radicalize and recruit ISIS members. He lost at the trial level and again on appeal, with both courts finding Google’s recommendations to be protected by Section 230.
The Supreme Court agreed to hear the case, and a number of amicus briefs2 were filed: in support of Google’s position (of not being held liable) from companies like Craigslist, Automattic (which owns WordPress.com and Tumblr), and ZipRecruiter, and in support of the plaintiff, Gonzalez, from Seattle School District #1, the National Police Association, Inc. et al., the Zionist Organization of America et al., and even a number of states (including one of the few times I suspect I will ever see Tennessee aligned with California).
In the second related (and equally sad) case before the Supreme Court, Twitter v. Taamneh, the family of Nawras Alassaf alleges that Twitter “aided and abetted” terrorism by allowing ISIS to recruit and train terrorists using its platform, and that this support of terrorism violated the same Anti-Terrorism Act as alleged in the Gonzalez case. And of course, Twitter (aka the soon-to-be-bankrupt dumpster fire where Elon Musk is allegedly selling plants to employees to raise funds) is arguing for protections under Section 230.
And, while all of this is happening, Joe Biden is again calling on Congress to remove Section 230 in its entirety.
Is product liability law the answer?
Like so many laws, product liability rules were created before the complexities of today’s world could be anticipated. Developed on a state-by-state basis at a time when the world was powered by the manufacturing of “things,” product liability does not currently extend to artificial intelligence or software code that recommends, displays, or suggests content. But should it? Contrary to Craigslist’s position, I think the answer is “yes” - and of note, Europe is already way ahead of us on this measure. A move in this direction would even be consistent internationally, since most of the providers in question serve a global market.
WHO IS LIABLE UNDER PRODUCT LIABILITY: Products liability already holds most of the supply chain (manufacturers, wholesalers, retailers) responsible for the harm caused by a defective product. This means a store like Target - a mere intermediary that displays and delivers products - can be liable for the harm caused by a weighted blanket it simply “displays” and sells. Thus, why can’t an “intermediary” presenting information (such as a social media company or other platform) also be liable, especially since it is a commercial party?
WHAT IS A PRODUCT FOR PURPOSES OF LIABILITY: Historically, products were tangible things that were manufactured and made. However, intangibles such as electricity delivered to a customer can be considered a product under liability law, so why not the intangible “product” of a technology provider, even if that product is a curated list of recommendations or suggestions based on things you are already deemed to be interested in?
CAUSATION: Product liability still requires causation, regardless of whether the particular law is founded on strict liability or negligence principles. Thus, a provider has to be the actual and proximate cause of the harm or injury to be held negligent or strictly liable under a product liability theory. This theory will be tested in a few other cases, including a California case brought by the mother of a young woman who committed suicide, allegedly due to the mental health impact of “apps [that] are explicitly designed to exploit human psychology through the use of sophisticated algorithms and artificial intelligence,” a Pennsylvania case alleging that TikTok caused the death of a young girl who died after attempting the “blackout challenge,” and the Snapchat “speed filter” negligent design case in Georgia.
DEFECTIVE NATURE OF PRODUCT: There are three types of product liability claims: design defect, manufacturing defect, and failure to provide adequate instructions or warnings. The two most likely to fit in the world of AI are failure to warn and design defect.
The first type of defect is a “failure to warn” of a dangerous circumstance. Here, just as tobacco makers are required to print warnings on their packages about the dangers of smoking, the parties who put individuals in touch with content that radicalizes them or leads them to injury could potentially be excused from liability by providing adequate warnings of the risk. This doesn’t alleviate the potential for harm, but it allows the consumer to understand the implications of what they are doing (or in this case, seeing).
Second, a design defect. This requires the “thing” - here, a curated list of suggestions or recommendations - to be flawed in its design in a way that makes it “unreasonably dangerous to use.” Arguments against applying this theory to technology often focus on how AI grows and changes over time without human intervention, such that the “product” was not defective when it left the hands of the developer. Again, product liability was not developed with machine learning in mind, but if a developer unleashes a tool that continues to grow and change based on any number of factors (which may also be heavily influenced by bias and profits), shouldn’t the “enabler” of the instrumentality still be responsible?
If this isn’t enough, keep in mind that in 47 states, the plaintiff bears the burden of proving the existence of a design defect. Thus, a person injured or harmed after viewing a video that YouTube suggested would need to show something more than mere proximity in time to prove causation.
There are also two tests (i.e., defenses) often available to defendants to show that their design was not defective. The first, the risk/utility test, shields a defendant from design defect liability if the evidence shows that the product’s utility outweighs its inherent risk of harm. As an added bonus, to prove such a thing, companies would have to actually test their products and algorithms instead of releasing Minimum Viable Products (MVPs) to an unsuspecting marketplace of users who then have to perform the duties of an “unpaid” product tester.
Under the second, the consumer expectation test, the defendant is not liable, even if the “product” caused harm, if “a reasonable consumer would not find the product to be defective even when using it in a reasonable manner.”
So, where does that leave us?
By making a range of damages available to plaintiffs, and ensuring the potential for punitive damages based on public policy, we would be in a better position to ensure that profits are not the driving force behind everything presented to us under the guise of “free speech.” Just as Dow Corning did not disclose the risks and production problems of its silicone breast implants in the 1970s, just as tobacco companies hid the health impacts of smoking, and, more recently, just as Takata knew of the defects in its airbags for more than a decade, technology companies that know their algorithms are biased, know they are driven by advertising dollars even when cloaked in “research” purposes, or know that their products enable bullies or radicalize terrorists and other bad actors should not have an artificial shield against liability. If their “products” cause harm to victims, they should be punished to the fullest extent of the law, not universally protected by an outdated law designed for very different reasons.
By limiting the protections of Section 230 to exclude harm or injury from the information presented by these platforms to users, while leaving protections for moderation intact, Congress and the judicial branch could ensure that the balance of power doesn’t tip too heavily toward the content “intermediaries,” which have grown beyond the tiny sprouts we intended to protect in 1996 into a very invasive and aggressively expanding forest species. And while I understand that these content “intermediaries” are at the very core of a lot of mutual funds, ETFs, and other investment vehicles funding people’s retirements, introducing the potential for product liability did not bankrupt the manufacturing and distribution of products, and it certainly will not eliminate the opportunity for profit in the technology sector. It simply gives victims an opportunity to use products liability to prove that they were harmed by the defective design of, or failure to warn by, the developer of a technology product, intermediary or otherwise.
Creatively extending existing law to new situations is nothing new. Federal prosecutors, for example, have used the Racketeer Influenced and Corrupt Organizations Act (better known as RICO) to address college admissions schemes, sex cults like NXIVM, and even neighborhood street gangs. So just as “Aunt Becky” and the Gambino family can be convicted under the same legal principles, victims of algorithms gone awry should be able to have their day in court, just as if they had been harmed by a cigarette they thought was safe or an airbag that exploded in their face.
And, just as I never thought I would see Tennessee and California agree on anything, I never thought I would agree with Clarence Thomas, but here I am doing just that. What is most perplexing is this: if liberals and conservatives align in their disdain for Section 230, how does it still exist at all?
Buddy Guy knows the answer, just as he did in “How to hire an effective advisor…”
And, if you like what you read here, please consider sharing it with your network. The larger the community, the better positioned we are to help each other succeed.
1. Bet you can’t guess which one I was watching….

2. For the nonlawyers who are readers, these “friends of the court” briefs are presented by individuals and companies who are not parties to the litigation, but who have strong positions that they wish to formally communicate to the court related to the impact of a particular decision.