Administrative Law Blog Essay

Facebook, Cambridge Analytica, and the Regulator’s Dilemma: Clueless or Venal?

The recent revelation that Facebook shared user data with Cambridge Analytica is just the latest in a long line of privacy missteps by the monolithic social media platform.  The public outcry seems louder today than ever before, but the crucial question is: will the company be more responsive than it was after previous privacy violations?

I spent nearly four years as director of the Federal Trade Commission’s Bureau of Consumer Protection, where I worked on several hundred enforcement cases—including the FTC’s enforcement case against Facebook—and many more investigations.  Part of my job was to listen to lawyers plead their cases on behalf of companies subject to FTC investigations.  For the most part, I understood why a company had strayed: poor judgment, bad legal advice, or the zeal to make a buck at all costs.  But sometimes I wondered whether the company was completely clueless or actually venal.  Clueless companies acknowledge that problematic conduct took place, but claim that they were unaware of it, and for that reason they shouldn’t be punished—or, at most, should get a slap on the wrist.  Venal companies deny all wrongdoing, no matter how egregious the violation, how incontrovertible the evidence against them, or how heavy the toll exacted on consumers.  And like all regulators, I struggled to decide which was worse—cluelessness or venality.  Do clueless violators deserve stricter oversight because they’re unpredictable?  Or do venal actors need a tighter leash because they are predictably bad?

I didn’t think that Facebook fell into the “venal” category when the FTC first investigated the company eight years ago.  The company seemed to understand that it had pushed too hard to force users to make private data public and was willing (if not happy) to rein in its drive to increase data sharing.

But Facebook’s enabling of the Cambridge Analytica campaign suggests that I may have been wrong.  Facebook is now a serial offender.  And for much of the company’s fourteen-year life span, Facebook has faced justified criticism that it is not candid about the extent to which user data is shared with app developers and other third parties.

The first controversy arose in November 2007, when Facebook launched “Beacon,” a program designed to broadcast actions, like purchases, that a Facebook user took on participating websites to all of their Facebook “friends.”  Users immediately pushed back, objecting to having their friends learn of their purchases without their permission.  Users also complained that Beacon’s sharing function was turned on by default—forcing CEO Mark Zuckerberg to give one of his signature apologies (“We’ve made a lot of mistakes . . .”) in a blog post (Wired recently assembled a complete history of Zuckerberg’s apologies).  After efforts to salvage the program failed, Facebook abandoned Beacon in 2009 as part of a settlement of a class action lawsuit.

Facebook’s next controversy arose just as it was shutting down Beacon.  In November and December of 2009, Facebook made two changes to its privacy policy to designate as “publicly available” certain information that had previously been private and subject to the user’s control.  Facebook had long given users privacy settings that restricted access to certain information to specific groups, such as “Only Friends” or “Friends of Friends.”  Although Facebook told users they could continue to restrict sharing of data to limited audiences, that wasn’t so.  In fact, selecting “Only Friends” did not prevent a user’s information from being shared with the third-party applications their friends used (sound familiar?).  Facebook also represented that the third-party apps users installed would have access only to the user information they needed to operate.  Once again, that was not true; the apps could access nearly all of a user’s personal data.  That’s why the FTC charged Facebook with eight counts of deceptive acts and practices.

To settle the FTC charges, Facebook entered into a consent decree with the FTC in November 2011.  The decree barred Facebook from making any further deceptive privacy claims; required Facebook to get consumers’ approval before changing the way it shares their data; and, most pertinently here, required Facebook to give users clear and conspicuous notice and to obtain their “affirmative express consent” before sharing their data with third parties.

Let’s consider the Cambridge Analytica debacle through the lens of the FTC-Facebook consent decree.  According to press accounts, which Facebook has not disputed, in 2014 Facebook permitted a researcher, Aleksandr Kogan, to pay Facebook users to download an app called “thisisyourdigitallife,” ostensibly for scholarly purposes.  Kogan did not disclose his intention to share user information with Cambridge Analytica.  At least 270,000 Facebook users downloaded the app.  The app also collected the information of those users’ Facebook “friends,” allowing it to acquire data on an estimated 50 million Facebook users.  Does Facebook, or anyone else, really believe that these 50 million users “consented” to the harvesting of their data by Kogan?  I don’t.  For this reason, the non-consensual harvesting of massive amounts of data by a third-party app – the very violation of law that led to the first FTC case against Facebook – lies at the center of the Cambridge Analytica investigation.  (Zuckerberg provided yet another blog post apology, which contains a useful, if somewhat self-serving, chronology of the Cambridge Analytica case.)

Facebook’s apparent violation of this provision of the decree is troubling.  The decree makes clear that robust opt-in consent is required before any sharing that exceeds the restrictions imposed by a user’s settings.  The decree draws a bright line between a “user” – defined to mean “an identified individual from whom [Facebook] has obtained information” – and a “third party,” defined to mean “any individual or entity that uses or receives covered information obtained by or on behalf of” Facebook.  The decree thus permits Facebook users to continue to share information, but it forbids third parties from obtaining a user’s private information unless the user is given notice and consents or has enabled privacy settings that allow broad sharing.  The decree requires that, when third-party access exceeds a user’s settings, notice be given clearly and prominently, and that the notice disclose exactly what user information will be harvested and the identity or category of the third party seeking the data.  The user must then give “affirmative express consent.”

In my view, these requirements were not met when Kogan deceived 270,000 users into thinking that their information would be used solely for research, and then gained access to the data of 50 million of their friends, who had no clue (and probably still don’t) that their data was harvested as well.  Even aside from the consent decree, this harvesting plainly violated the Federal Trade Commission Act’s prohibition against “deceptive acts or practices.”

Facebook has suggested that, at the time Kogan gained access to the data of 270,000 users, Facebook’s settings allowed third parties to harvest everything from users and their friends, and thus there was no violation of the decree.  In my view, that argument is far-fetched.  The FTC’s investigation will focus on a user’s reasonable expectations – that is, what did users and their friends understand their “privacy” settings to mean?  Were users clearly and unmistakably informed that permitting sharing with friends meant broad and virtually unrestricted access to their data by third parties?  Did the “friends” understand the breadth of third-party access to their data based on decisions that others made?  I seriously doubt that Facebook could make that showing, and indeed, I think it would be self-defeating for Facebook even to try.  That argument would betray any claim that Facebook cared at all about user privacy—as Zuckerberg seemingly admitted by describing such harvesting of data by apps like Kogan’s as an “abuse” of the data-sharing feature, which was supposed to be innocuously “social.”

There seem to be other serious violations as well.  A different section of the decree requires Facebook to assess risks to consumer privacy and take reasonable measures to counteract those risks.  Facebook appears not to have taken this requirement seriously either.  It doesn’t appear that Facebook had even the most basic compliance framework to safeguard access to user data.  Most critically, it is entirely predictable that if app developers are not held to their promises about data collection and sharing, they might not be candid with Facebook about their intentions.  Yet it seems that Facebook made no effort to establish the bona fides of developers, much less verify or audit what user data app developers actually harvested and shared.  As the press reports make clear, there were many reasons for Facebook to be wary of Kogan from the start, but there is no evidence that Facebook engaged in any serious screening of Kogan or other app developers, or any back-end audit or verification of the scope of data collection.

Indeed, Facebook’s recent announcement that it will start auditing the collection and sharing practices of pre-2014 app developers is powerful evidence that, until now, Facebook didn’t bother to do so.  This concern is heightened by the fact that Facebook was apparently oblivious to the abuses by Kogan and Cambridge Analytica until the Guardian and the New York Times reported that they were using data obtained from Facebook.  Even Facebook’s response to these revelations has been ineffectual.  At the time of this writing, Facebook still cannot confirm that the wrongfully obtained data has been destroyed and is no longer in use.  And Facebook’s inability or unwillingness to do so suggests yet another disturbing sign of non-compliance with the decree – it seems that Facebook’s contracts with app developers do not contain provisions that adequately safeguard user information and give Facebook effective legal remedies – including mandatory deletion of data – in case of unauthorized collection, sharing, or other wrongdoing.

All of this leads back to the question whether Facebook is a venal company that warrants especially harsh treatment from regulators.  Facebook now has three strikes against it: Beacon, the 2009 privacy modifications that made private user information public, and now the Kogan/Cambridge Analytica revelation.  Facebook can’t claim to be clueless about how this happened.  The FTC consent decree put Facebook on notice.  All of Facebook’s actions were calculated and deliberate, integral to the company’s business model, and at odds with the company’s claims about privacy and its corporate values.  So many of the signs of venality are present.

This is an acid test for Facebook.  So far we’ve heard contrition from Mark Zuckerberg.  But we’ve heard all of that before, and contrition does not bring about change.  We have also seen Facebook condemn Kogan and Cambridge Analytica for misusing user data, notwithstanding Facebook’s acknowledgement that, prior to 2014, its platform permitted Kogan to harvest data not just from those who downloaded his app, but also from all of their “friends.”  Condemnation of conduct Facebook itself enabled rings hollow.

On the other hand, Facebook maintains that it has made, and will continue to make, important changes that will guarantee that this kind of mass, non-consensual harvesting of user data can’t happen again.  It did make significant changes to its platform in 2014, but whether those changes are sufficient is open to question.  And Facebook has announced that it is, albeit belatedly, starting to employ some of the oversight and accountability measures that were contemplated in the consent decree.

But vague and unenforceable promises are not enough.  The better approach would be for Facebook to acknowledge that it violated the consent decree and to come to the FTC with specific proposals for serious and enduring reform.

Possible reforms are legion and should include some of the basic tools of privacy protection.  Facebook must devise systems to ensure that third parties do not have access to user data without safeguards that are effective, easy to use, and verifiable.  When third-party access is sought, users must be given clear notice and an opportunity to say yes or no – that is, the gateway must be the notice and affirmative express consent required by the 2011 decree.  Facebook also must develop accountability systems that prove that consumers have in fact consented to each use of their data by Facebook or by third parties.  And Facebook must agree to refrain from using blanket consents; after all, blanket consents are the enemy of informed consent.

Facebook also must take control of third-party access to its users’ data.  Facebook must do more than assume that app developers follow the rules.  Facebook must set up systems that audit third-party collection and sharing on an ongoing basis; that hold third parties to their promises through engineering controls and contractual lockups; and that give Facebook effective remedies when third parties break the rules – including enforceable rights to audit, retrieve, delete, and destroy data improperly acquired or used, and liquidated and actual damages for violations.

Facebook must also be accountable to the public.  There must be far more robust reporting to the FTC, but those reports are non-public.  To re-establish trust with its users, Facebook should consider appointing a data ombudsperson and establishing a group outside the company with unfettered access to Facebook data and employees to ensure that Facebook is now, finally, honoring its commitments to users, and this group should periodically report its findings on Facebook’s compliance.

One last comment.  Because of limited statutory authorization and the constraints of the First Amendment, the FTC is unlikely to investigate the most troubling aspect of the Cambridge Analytica matter – namely, the harvesting of user-specific data that was then deployed to shape those users’ political views, all in an effort to influence the election.  There should be little doubt that Facebook user data sharpened Cambridge Analytica’s algorithms, which made the Trump campaign’s micro-targeted messaging more effective.

It is also worth noting that Cambridge Analytica could have obtained similar data from dozens of other sources, including other social media sites and data brokers.  The FTC’s investigation of the data broker industry demonstrates just how extensive data collection has become, and that big data brokers have access to data similar to what users post on Facebook.  So the third-party access-to-data issue is not the only question Facebook must confront, and it may not even be the most important.

That question goes to Facebook’s role in electoral politics.  Facebook appears to have enabled Cambridge Analytica’s advertising work in ways that are opaque to the public.  Equally important, Facebook has emerged as an important arena for political discourse, but it has not articulated any vision of what role, if any, the company should play in electoral politics.  Will it accept advertising from all comers (except, perhaps, non-US entities), or will it limit its advertising to avoid even an appearance of partisanship?  Facebook’s future in American politics is yet another factor that will bear on whether people judge Facebook to be venal or not.