
Section 230, the Third-Party Doctrine, and the Looming Dark Age

Over the last week I’ve been working with my colleagues to rebuild Parler’s infrastructure, improve our guidelines enforcement process, and get back online. (Step one: a static web page.) I’ve also been making our case in the media—and in doing so have been mentally processing the injustice that has been done both to Parler and to those who have relied upon Parler to express themselves online. Thanks to those who have invited us to make our case, as well as to those—like the authors of this excellent opinion piece—who have contributed to my thinking on the relevant issues. (Standard Disclaimer: I speak only for myself here, and any errors in presentation or inference are mine.)

There are many staunch defenders of Section 230, which grants legal immunity to platforms for user-generated content, as well as for “good faith” decisions to remove or otherwise curate “objectionable” content. The above-linked piece (link again here) calls into question the wisdom of this immunity, at least when it exists alongside pressure on private companies from legislators to remove content from their platforms, when that content would otherwise be protected by the First Amendment. The authors cite legal precedent holding that conduct of “private” companies for which government grants immunity, and which government pressures them to engage in, is better thought of as publicly enforced conduct of a private company. And so, while many of us (myself included) have resisted referring to content moderation by private companies as “censorship,” we might need to consider calling it “censorship-by-proxy.”

Now recall that Mark Zuckerberg, in the most recent Big-Tech-CEO-Hearanguing before Congress, suggested amending Section 230 as follows:

  1. “Transparency” – each company enjoying Section 230 immunity would be required to issue periodic reports detailing how it dealt with certain types of “objectionable” content. 
  2. “Accountability” – platforms enjoying immunity could also be held to some minimum level of “effectiveness” with respect to dealing with that “objectionable” content. (Recall he also bragged about how effective Facebook’s “hate speech” algorithms are.)
[Image: Advertisement in the December 8, 2020 New York Times]

Perhaps you think “transparency,” at least, is good. But imagine what information ends up being collected and retained as “ordinary business records” when complying with this sort of law, and read on.

In the last week-plus, we’ve seen a chorus of people blaming Parler, specifically, for threats or incitement in user-generated content. According to these voices, Parler’s failure to deal adequately with this content was responsible for the inexcusable actions of a number of individuals on January 6. Setting aside issues of free will, consider the fuller factual picture that has since been revealed: Parler’s competitors’ platforms were also filled with this content, and some blame Facebook for playing a much larger role in facilitating the planning that led up to the 6th.

Yes, that’s a Salon article. What does Salon hope to gain by blaming Facebook and showing sympathy to Parler? I argue that placing responsibility for user-generated content on platforms plays right into the totalitarians’ hands.

With all the platforms now being blamed for user-generated content containing threats or incitement, the new Congress needs only to accept Mark Zuckerberg’s engraved invitation to amend Section 230 along the above lines. But, as we’ve learned in the last week, no system of guidelines enforcement is perfect. If Facebook, with all its algorithms and other resources, could not “adequately” deal with this content, then what company could?

If it’s not actually possible to be good at this, to the standard that everyone seems to expect, and Zuckerberg is calling for all of us to be regulated according to that standard, then what exactly is he calling for (whether he realizes it or not)? For government to take over, to have arbitrary control. For all online platforms to operate only by permission of government, according to whatever standards politicians (or the Twitter mobs pulling their strings) deem fit—and this will be true with respect to both free speech and privacy.

As for free speech, not only has Zuckerberg invited “hate speech” regulation, I’ve also learned this week that the leading third-party AI solutions seem to be much better at detecting “hate speech” than they are at detecting threats or incitement. Perhaps this is because many platforms have elected to moderate “hate speech” more broadly? That is not, as many of you know, Parler’s approach. This is because the term “hate speech” is vague, and is generally held to encompass much speech that is protected by our First Amendment. We have all had a challenge, in the last couple of weeks, determining what language, in which context, is “incitement” (“I love you”?). Imagine how subjective things will get when “hate speech” moderation becomes mandatory.

What has Facebook hoped to accomplish by encouraging this? I can only speculate that the company is trying to preserve both its data-mining practices and its rumored engagement-enhancing algorithms, upon which its monetization depends—and, it hopes, to keep it all under the hood, immune from discovery, via Section 230.

As for privacy, some say private companies don’t conduct “surveillance” when they enforce terms of service. Now I’ll remind you of the work I was doing before I joined Parler: promoting and deploying a novel solution to the problem of the “third-party doctrine.” The doctrine says that information a person shares with a “third-party”—such as a social media platform—is not protected by the Fourth Amendment, and therefore that government can obtain such information without a warrant.* 

Now we can predict what’s to come: Section 230 will be amended as Zuckerberg suggests. Given the current capabilities of AI, this will likely mean that all platforms will be required to scan ubiquitously for “hate speech,” “misinformation,” or who knows what else. The results of these scans will become ordinary business records of the platforms, obtainable by government without a warrant. No probable cause, no particularized suspicion—perhaps nothing more than a “consent order” could result in routine access to these records. Minority Report, anyone?
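
To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python) of what such a mandated scanning pipeline might look like. Everything in it (the classify_text stub, the ComplianceRecord fields, the log file) is invented for illustration; the point is only that every post gets scanned, each scan produces a record the platform retains, and, under the third-party doctrine, such retained records would be obtainable without a warrant.

# Hypothetical sketch of a mandated "scan everything" compliance pipeline.
# All names (classify_text, ComplianceRecord, compliance_log.jsonl) are
# invented for illustration; no real platform's system is described here.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ComplianceRecord:
    """One scan result, retained as an ordinary business record."""
    user_id: str
    post_id: str
    scanned_at: str
    labels: dict  # e.g. {"hate_speech": 0.91, "incitement": 0.07}

def classify_text(text: str) -> dict:
    # Stand-in for a third-party AI moderation model. A real deployment
    # would call an external classifier; this stub just flags keywords
    # so the example runs end to end.
    lowered = text.lower()
    return {
        "hate_speech": 0.9 if "hate" in lowered else 0.01,
        "incitement": 0.9 if "fight" in lowered else 0.01,
    }

def scan_post(user_id: str, post_id: str, text: str, log_file) -> None:
    # Scan one post and append the result to a retained log. Once
    # written, the record exists independently of the post itself.
    record = ComplianceRecord(
        user_id=user_id,
        post_id=post_id,
        scanned_at=datetime.now(timezone.utc).isoformat(),
        labels=classify_text(text),
    )
    log_file.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    with open("compliance_log.jsonl", "a") as log:
        scan_post("user-123", "post-456", "I love you", log)

Note that even the innocuous “I love you” example above gets scanned and logged: what accumulates is not the speech itself, but a retained record of the scan.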

A few months ago, a colleague wondered whether we should engage in more activism. I said that just offering our product is plenty! At Parler, a crucial part of our mission is to collect only the bare minimum of user data, rejecting the business model of Big Tech as we know it today (with a few possible exceptions). In addition, given the total context—more of which I’ve come to understand only this week—we have been outspoken in calling for the repeal of Section 230. I hope more people will understand why we are seen as a threat to this industry—before it’s too late.

*This has been true only since the 1970s, when the Supreme Court, unjustifiably and without explanation, transported the third-party doctrine from the context of information shared in the course of criminal activity to the context of information shared in the course of ordinary business.


Open Letter to Tim Cook

Apple products are themselves evidence for the proper model of legal protection for privacy. Why undermine that now by supporting federal regulation based on the wrong model?

Dear Mr. Cook,

I’ve been an Apple user since your company released the beautiful, floating-screen iMac in 2002. Epinions.com no longer exists, but soon after I got my iMac I wrote an effusive review there called “Fashion, Function and Fun–All in One.” Since then I’ve used nothing but Mac computers, as well as iPads, iPhones, and an Apple Watch. Like so many Apple users, I love the elegant, intuitive design and ease of use of your products.

Although my academic research has been focused on the “right” to privacy, I never paid too close attention to the privacy features of Apple products. Yes, I was relieved to learn that Apple computers are less susceptible to viruses than other brands. And I loved hearing about how seriously Steve Jobs took the responsibility of Apple customers entrusting their private information to Apple. But it was only in recent years that I learned more about what, concretely, Apple does to protect customer privacy–by creating the tools necessary for us to safeguard and control our private information.

First, I cheered Apple’s refusal to write software that would unlock a user’s encrypted phone. Phone encryption puts law enforcement back in the position they’ve been in traditionally: having to present a warrant to the actual user/data subject/suspect, instead of presenting a warrant, or perhaps merely a subpoena, to a “third party.” Given the amount of private information contained in our phones–the Supreme Court has recognized that it’s often even more private, more comprehensive, than what might be found by searching our homes–law enforcement should have to present a search warrant to the subject of investigation directly. Apple’s default encryption features help ensure this.

More recently, I was excited to learn about numerous other features of Apple hardware and software that allow us to withhold personal information about ourselves, our devices, and our use of them, from companies whose web sites we visit, or whose apps we use. I was so impressed with Apple’s efforts to continually improve these and make them more robust that I featured this Fast Company article about them as a “good news” story in a “News Sandwich.”

Most people reading this letter–you included–will probably now expect me to add Apple’s support this week for “comprehensive federal privacy legislation” as another reason to applaud your company’s efforts to protect our privacy interests. But the opposite is true. I believe that, in supporting federal privacy regulation, you are undermining the progress you’ve made putting control over privacy into the hands of us, your customers.

Why? Because by supporting the enactment of privacy regulation–particularly regulation based on the idea that “privacy is a fundamental human right”–you are helping to further entrench an entire legal framework that undermines our ability to actually protect privacy. This seems ironic, perhaps, but consider the evidence for the proper model for legal protection of privacy that your company itself has provided. Your products have become a powerful demonstration of the real foundation of privacy: property and contract. You allow us to buy a product, which we can then use to create a state of privacy for ourselves, to protect our private information, and to control which information we share with other companies. Your products put the control over privacy where it should be: with the individual, via our exercise of our property and contract rights.

Unfortunately, by supporting federal legislation based on a distinct right to privacy, you risk ripping control over privacy out of our hands, and putting it in the hands of government. This means, per the current legal framework, we’ll be even more at the mercy of whatever some judge, legislator or bureaucrat deems our “reasonable expectations of privacy” to be. (Read more here as to why upholding a distinct “right” to privacy is not just “theoretically” wrong, but in fact destroys our ability to properly protect states of privacy.) What’s worse is that Apple is doing this at precisely the time when the Supreme Court may be prepared to recognize the proper basis for the legal protection of privacy. (See the dissents by Justices Thomas and Gorsuch in the Court’s recent Carpenter ruling, discussing a property basis for requiring a warrant to obtain cell phone location data from a service provider.*)

You’ve done so much–perhaps more than any other tech company–to give us de facto privacy. Please don’t compromise that by helping to make de jure privacy all but impossible.

Sincerely,

Amy Peikoff

*Incidentally, the full solution to the problem of the third-party doctrine, presented in Carpenter, is outlined in this article.


“Will The Supreme Court Legalize Privacy?” TODAY at 3 p.m. ET (12 p.m. PT)

This morning the Supreme Court heard Carpenter v. United States, a case concerning legal protection for the privacy of location data collected by cell phone service providers. Will the Court reinstate Fourth Amendment privacy protection for our data? This and more on today’s show. See Program Notes, below, for the stories, etc., I plan to discuss.

Join in live, either by phone or in the chat room at BlogTalk Radio!

The show can be accessed here.

To access the show’s page at BlogTalk Radio, which will allow you to check out a past episode or to subscribe via iTunes and other services, use this link.

To access the iTunes store page for “Don’t Let It Go…Unheard,” where you can find past episodes, subscribe, and leave ratings and reviews (pretty please!), use this link.

This show is fueled, in part, by Bulletproof Coffee. And now you can help support it by fueling up with some Bulletproof Coffee yourself! Grab some Brain Octane Oil, which, combined with grass-fed butter and blended in your coffee, will help you start your day with sustained energy and focus.

Finally, if you would like to support the show directly, please donate using your PayPal account or credit card here.

Program Notes

The Supreme Court’s Opportunity to Legalize Privacy

Carpenter v. United States: The Court’s Opportunity to Legalize Privacy

United States v. Miller

Smith v. Maryland

Carpenter v. United States at SCOTUSblog

Cops, Cellphones and Privacy at the Supreme Court
