Section 230, the Third-Party Doctrine, and the Looming Dark Age

Over the last week I’ve been working with my colleagues to rebuild Parler’s infrastructure, improve our guidelines enforcement process, and get back online. (Step one: a static web page.) I’ve also been making our case in the media—and in doing so have been mentally processing the injustice that has been done both to Parler, and to those who have relied upon Parler to be able to express themselves online. Thanks to those who have invited us to make our case, as well as to those—like the authors of this excellent opinion piece—who have contributed to my thinking on the relevant issues. (Standard Disclaimer: I speak only for myself here, and any errors in presentation or inference are mine.)

There are many staunch defenders of Section 230, which grants legal immunity to platforms for user-generated content, as well as for “good faith” decisions to remove or otherwise curate “objectionable” content. The above-linked piece (link again here) calls into question the wisdom of this immunity, at least when it exists alongside pressure on private companies from legislators to remove content from their platforms, when that content would otherwise be protected by the First Amendment. The authors cite legal precedent holding that conduct of “private” companies for which government grants immunity, and which government pressures them to engage in, is better thought of as publicly enforced conduct of a private company. And so, while many of us (myself included) have resisted referring to content moderation by private companies as “censorship,” we might need to consider calling it “censorship-by-proxy.”

Now recall that Mark Zuckerberg, in the most recent Big-Tech-CEO-Hearanguing before Congress, suggested amending Section 230 as follows:

  1. “Transparency” – each company enjoying Section 230 immunity would be required to issue periodic reports detailing how it dealt with certain types of “objectionable” content. 
  2. “Accountability” – platforms enjoying immunity could also be held to some minimum level of “effectiveness” with respect to dealing with that “objectionable” content. (Recall he also bragged about how effective Facebook’s “hate speech” algorithms are.)
[Image: Advertisement in the December 8, 2020 New York Times]

Perhaps you think “transparency,” at least, is good. But imagine what information ends up being collected and retained as “ordinary business records” when complying with this sort of law, and read on.

In the last week-plus, we’ve seen a chorus of people blaming Parler, specifically, for threats or incitement in user-generated content. According to these voices, Parler’s failure to deal adequately with this content was responsible for the inexcusable actions of a number of individuals on January 6. Setting aside issues of free will, consider the fuller factual picture that has since been revealed: Parler’s competitors’ platforms were also filled with this content, and some blame Facebook for playing a much larger role in facilitating the planning that led up to the 6th.

Yes, that’s a Salon article. What does Salon hope to gain by blaming Facebook and showing sympathy to Parler? I argue that placing responsibility for user-generated content on platforms plays right into the totalitarians’ hands.

With all the platforms now being blamed for user-generated content containing threats or incitement, the new Congress needs only to accept Mark Zuckerberg’s engraved invitation to amend Section 230 along the above lines. But, as we’ve learned in the last week, no system of guidelines enforcement is perfect. If Facebook, with all its algorithms and other resources, could not “adequately” deal with this content, then what company could?

If it’s not actually possible to be good at this, to the standard that everyone seems to expect, and Zuckerberg is calling for all of us to be regulated according to that standard, then what exactly is he calling for (whether he realizes it or not)? For government to take over, to have arbitrary control. For all online platforms to operate only by permission of government, according to whatever standards politicians (or the Twitter mobs pulling their strings) deem fit—and this will be true with respect to both free speech and privacy.

As for free speech, not only has Zuckerberg invited “hate speech” regulation, I’ve also learned this week that the leading third-party AI solutions seem to be much better at detecting “hate speech” than they are at detecting threats or incitement. Perhaps this is because many platforms have elected to moderate “hate speech” more broadly? That is not, as many of you know, Parler’s approach. This is because the term “hate speech” is vague, and is generally held to encompass much speech that is protected by our First Amendment. We have all had a challenge, in the last couple of weeks, determining what language, in which context, constitutes “incitement” (“I love you”?). Imagine how subjective things will get when “hate speech” moderation becomes mandatory.

What has Facebook hoped to accomplish by encouraging this? I can only speculate that the company is trying to preserve both its data-mining practices and its rumored engagement-enhancing algorithms, upon which its monetization depends—and to keep it all under the hood, immune from discovery, via Section 230.

As for privacy, some say private companies don’t conduct “surveillance” when they enforce terms of service. Now I’ll remind you of the work I was doing before I joined Parler: promoting and deploying a novel solution to the problem of the “third-party doctrine.” The doctrine says that information a person shares with a “third-party”—such as a social media platform—is not protected by the Fourth Amendment, and therefore that government can obtain such information without a warrant.* 

Now we can predict what’s to come: Section 230 will be amended as Zuckerberg suggests. Given the current capabilities of AI, this will likely mean that all platforms will be required to scan ubiquitously for “hate speech,” “misinformation,” or who knows what else. The results of these scans will become ordinary business records of the platforms, obtainable by government without a warrant. No probable cause, no particularized suspicion—perhaps nothing more than a “consent order” could result in routine access to these records. Minority Report, anyone?

A few months ago, a colleague wondered whether we should engage in more activism. I said that just offering our product is plenty! At Parler, a crucial part of our mission is to collect only the bare minimum of user data, rejecting the business model of Big Tech as we know it today (with a few possible exceptions). In addition, given the total context—more of which I’ve come to understand only this week—we have been outspoken in calling for the repeal of Section 230. I hope more people will understand why we are seen as a threat to this industry—before it’s too late.

*This has been true only since the 1970s, when the Supreme Court unjustifiably, and without explanation, transported the third-party doctrine from the context of information sharing in the course of criminal activity to the context of information sharing in ordinary business dealings.


7 responses to “Section 230, the Third-Party Doctrine, and the Looming Dark Age”

  1. Ed Powell

    I’m glad you’ve come to understand that what Big Tech is doing, based on prodding from certain NGOs and politicians, is in fact censorship, even if couched as “censorship by proxy”. I have held this position for years. The question is not “Is the government doing this censoring?” The question is, “Are the people who are in effective control of our government doing this censoring using private companies as proxies because there is still a modicum of honesty in the courts when dealing with First Amendment issues that prevents the government (temporarily) from passing anti-hate-speech laws?”

    If you look at the history of Section 230 of the “Communications Decency Act,” what you find is that the congressional intent of the license to remove “objectionable” content while still receiving immunity was to allow, indeed encourage, web sites to remove pornography, nudity, or other prurient content, NOT to allow them to curate speech based on ideas. That no one has mentioned this in all the discussions astounds me. Why was the “Communications *DECENCY* Act” ever passed? To allow for a free flow of information without endless porn spam (as well as Viagra spam, vituperative name-calling, or illegal activity).

    Since you are a lawyer, you know full well that the standard for incitement from Brandenburg v. Ohio (1969) is speech that produces or is likely to produce *imminent* *lawless* action. The standard for imminence is quite narrow: it means “right now”, not “sometime in the future.” And lawless action means just that, action that is against the law. Advocating, for example, to walk to the Capitol and demonstrate peacefully in favor of a certain political action is neither imminent (it’s a 45-minute walk from the Washington Monument), nor illegal (since permits for the protest were obtained well in advance). It is certainly *possible* to use social media to plan imminent lawless action, but sifting that signal out of the broad noise of general twitfuckery is impossible for a human, and definitely impossible for a computer. If I sent you a Facebook message saying “Let’s you and I go attack the Capitol right now!” that might seem like an incitement to imminent lawless action, but then when you understand that I am an hour away and you are in Texas, that statement (though stupid) fails the test of imminence, and thus is not legally incitement, even if it is ill-considered. No computer can figure all this out. Thus the AIs are tuned to produce very few false negatives with the obvious consequence of producing an enormous number of false positives. Examples of these false positives abound on the internet.

    Nevertheless, the real issue is not the AIs (which are admittedly tuned to be stupid), nor the algorithms used, which are destroying these companies’ usefulness and thus profitability anyway. No, the people being deplatformed are not being deplatformed by AIs, but by human beings. And it is the decisions of those human beings that are at issue, not the stupid decisions of automated systems. There are hundreds of thousands of vile death threats against people every day on Twitter. You personally know an individual who received such death threats every day, and who was themself banned for the death threats they received, while none of the death-threat senders was banned!!! This result is not an error. Left-wing and jihadi terror groups are allowed to use Twitter and Facebook to plan terror attacks, plan riots, intimidate businesses and individuals, and threaten entire nations with annihilation with no consequence whatsoever.

    We tend to think of the deplatforming crisis as either an issue of free speech (akin to the First Amendment) or one of anti-trust (which it is; more on that later). These are both true enough. But the real crisis is much more akin to the rest of the Bill of Rights than to the First Amendment. Big Tech companies have the right to search (and read) your private correspondence without your permission, unlike, for example, the Post Office, which has never had this power, and still doesn’t today even though it is technically a “private” company now. People build businesses on these Big Tech platforms, and have such businesses completely taken from them without anything resembling “due process of law”. They are banned. They “appeal.” They receive a form letter within minutes saying their appeal is denied, and there is no one to speak to. The companies’ “terms of service” purport to demand arbitration, but the arbitration process is so skewed in favor of the corporations that it is almost impossible to prevail even on the basic tort of tortious interference in a business relationship (which these Big Tech companies commit routinely, and yet they are protected by “elected” California judges who are bought and paid for by these same companies). People are banned for things they said years ago, denying them anything like a “speedy trial” and ignoring the legal doctrine behind statutes of limitations. No one is ever allowed to “confront their accusers.” Indeed, given that these deplatformings are almost always driven by NGOs like Hope Not Hate, the SPLC, or the ADL, the banned are never even allowed to know who their accusers were! They are not allowed to produce witnesses in their favor; indeed, they are not allowed to have any witnesses, or even a hearing! Finally, the “punishment” is almost always the “death penalty” for their business, with no warning or “strikes” or anything, making the punishment both excessive and cruel.
    Finally, moving to the business analogue of the 14th Amendment, no one enjoys “equal protection of the laws”. As described earlier, and as you know from your own personal experience, it is often the *victims* who are punished rather than the violators, if the victims hold a certain philosophy and the perpetrators belong to certain favored groups. This is unequivocal.

    All of the things in the previous paragraph I lump into the general category of “due process of law.” In these cases of censorship and deplatforming there is no due process of law–none whatsoever, and it is THIS lack that dooms any attempt to improve the situation by trying to enforce a commitment to freedom of speech or to make their ridiculously subjective rules more objective. This is true EVEN AT PARLER. A site could have completely provably objective rules, like “if you post the n-word, you will be banned.” Seems perfectly objective and straightforward, right? But what will that objective rule matter if it is unequally applied—if, as Patreon proved, one group can use it and another group can’t, not in the rules themselves, but in the enforcement? Or if your account is hacked and the hacker posted the offending words (and you can prove it), but there is no ability to have a hearing to present evidence? Yes, free speech is very important, and a culture of free speech (“I disapprove of what you say, but I will defend to the death your right to say it”) is crucially important for a free society, and I applaud you and Parler for fighting this fight. But that is only 1/3 of the fight. “Due process of law” is the other 2/3, and no one seems to want to address it. Only when the Big Tech firms are, like the Post Office, subject to what is analogous to all of the Bill of Rights (as Ayn Rand reminded us, all rights are integrated, and this applies to “civil” rights–process rights–as well as natural rights) will we have freedom of speech in this country again.

    I want to say unequivocally that I reject “regulation” as a solution to this problem. Regulation almost never works as intended, since the regulators are immediately subject to regulatory capture and thus do the bidding of the companies they are supposed to regulate rather than protect the interests of the citizenry. What is needed is federal standing to sue based on breach of contract (that is, these companies do not themselves adhere to their own terms of service, or apply them arbitrarily, capriciously, or discriminatorily), tortious interference in a business relationship, and false advertising, as well as the ability to federally enforce state laws that these companies violate with impunity. Federal courts are the appropriate jurisdiction in which to fight these companies’ deceptive business practices, since almost all of their customers live in different states. Similarly, given the corruption of California state courts, only in federal court can justice be done. This type of case is *exactly* the original purpose of federal courts. Repealing Section 230, or modifying it, or leaving it be does not address the real problem. Only giving consumers a federal cause of action to force the companies to abide by their own terms of service, equally for all consumers, will have any effect. They can then, if they wish, declare themselves partisan organizations who only invite fellow partisans to join, and they can make explicit that anyone who does not agree with whatever the Democratic Party (or the New York Times, or the ADL, or Al Qaeda) is pushing that day–that hour–is not welcome and can be kicked off their platform. This result would be *much* better than regulation, since it would legally force these companies to be *honest* in their dealings with the public. That’s all any of us have ever wanted.

    Finally, the collusion between Twitter, Amazon, the Democratic Party, and the ADL to deplatform Parler is a classic violation of the anti-trust laws. Now, we all know that there are a lot of ambiguities and contradictions in the anti-trust laws, but this case does not involve any of those. This case is clear-cut, out-and-out illegal. We both may agree that the anti-trust laws are immoral and should be repealed, but as long as they are on the books and have been applied to many good, honest companies, equal protection of the laws *demands* they be applied to evil companies. And Facebook, Twitter, Apple, Google, and Amazon are, in this deplatforming campaign, decidedly EVIL. I like the products and services of all but Google, but as the old saying goes, “a teaspoon of wine in a barrel of garbage is garbage, and a teaspoon of garbage in a barrel of wine is garbage.” The evil in these companies needs to be cut out entirely, or the companies need to be destroyed by consumer lawsuits (as described in the previous paragraph) or anti-trust lawsuits (pending the repeal of these laws). Evil spreads like a…well…a coronavirus. We all know what happens when firm measures aren’t taken to stop the spread at the beginning. As for Google, the entire company is corrupt and, as Hillary Clinton remarked, “irredeemable.” They are in bed with our enemies, the Chinese Communist Party, to which they provide both technology to oppress the Chinese people and critical private information about American citizens. They are thoroughly penetrated by spies, profit off of the selling of our private information without our consent (click-through “I accept” buttons do not constitute “consent” because contracts require a “meeting of the minds,” which is entirely absent in these click-through “contracts”), read our every email and document, and entirely abuse their users, as well as the United States as a country. The other companies, I think, can be redeemed by judicious use of the law.
    Google cannot be.

    Google delenda est.

  2. I definitely disagree with the idea that the Internet and the speech platforms ought to be regulated, but I would like to hear more about the violation of free speech by proxy that you mention, because I do not see it that way yet. Though with the Left seeming to act towards that end—what with AOC wanting a ministry of truth for media—it is not far away. But certainly keep us informed, and I do think FB et al. are asking for trouble in wanting more regulations or an amendment to Section 230 (to what end?). Besides, by what standard, government or private, is one to decide what “objectionable” content can be kept off the forum? I have to caution that the idea of censorship by proxy is a dangerous attitude to take, since the Internet’s servers and infrastructure are still (mostly) privately owned and operated, and not government sponsored, so how does that “by proxy” actually work? At the behest of the policies of the Left, when FB et al. side with them and remove “objectionable” content based on that standard? Don’t they have a right to run their platform according to their own ideals and standards?

    I do hope Parler gets back up and running and figures out a better way of keeping threats off its platform. AWS’s excuse for removing you from their servers rings hollow when, as you point out, the same content has appeared on other forums as well. Double standard, anyone?

    Keep up the good work.

  3. Ed Powell

    While Amy is understandably concerned about the deplatforming of Parler, it is important to note that there has been a metaphorical bloodbath of deplatforming happening on the internet in the last 10 days. This article gives a partial summary:

    https://www.ar15.com/forums/General/The-first-coordinated-Internet-purge-2021-a-list-in-progress-/5-2411799/

  4. Ed Powell

    One other important article that goes to my point about the lack of “equal protection under the laws” in social media. Even Salon knows that most of the rioters used Facebook to coordinate the DC riot, yet for some reason Facebook was never even warned by Apple, Google, or Amazon:

    https://www.salon.com/2021/01/16/despite-parler-backlash-facebook-played-huge-role-in-fueling-capitol-riot-watchdogs-say/

  5. I think I am beginning to understand your concerns about Section 230: you want Common Law to do that aspect of law for the Internet. Now that I think about it more, though I have not seen it put this way, why have regulations for the Internet at all in the first place? If it were all privately owned and operated, then no government involvement would be necessary, because the Terms of Service could say that we reserve the right to cancel any account at any time at our discretion (provided there was no actual contract). In this way, the Internet could set an example regarding regulations: it would be legally free of regulations and bound not by Congressional law, but by Common Law. That is, if one thought one had been libeled or slandered, then one could sue for damages—just whom would depend on the context. So, just as some Objectivists do not think there needs to be a government-run standard for weights and measures, there would not be any need for government standards for how to run a website or a platform or a server or an ISP.

    Can you please elaborate on this aspect of your thoughts in a later entry? Thanks!

  6. Only my high regard for Amy’s thinking has me open to the idea of “censorship by proxy.” I read the OP and all of the comments prior to mine, but I still fail to see the connection. What is government doing to incentivize Amazon to suspend its hosting of Parler?

  7. Pingback: Newt's World - Episode 189: Social Media Censorship – What’s Next for Parler? | Gingrich 360
