In 1655, King Louis XIV of France is said to have told the French parliament, “L’Etat, c’est moi.” (“I am the state.”) An alternative English rendering of what the Sun King really meant, however, is something more like “It is legal because I wish it.” Europe’s longest-serving major monarch believed in the divine right of kings, and consequently in his right – perhaps duty – to be the final, and sometimes only, arbiter of what was good for France. If Le Roi said it, then it was in the public interest, and it was so.
Today, the pendulum has swung to the opposite extreme. The radical democratization of information brought by the internet now challenges the very concept of regulatory control over what content is seen and what is not. The European Court of Justice (ECJ) has recently pushed this process to its limit, granting each of 500+ million European individuals wide-ranging control over what information about them can be found online. In doing so, the so-called “right to be forgotten” declared by the Court has put much of the European public’s right to knowledge at the mercy of individual whim.
The “right to be forgotten” is the worst kind of law visited upon the most vulnerable kind of technology. The ruling itself is a knee-jerk reaction to a public moral panic over what is euphemistically called “privacy,” but is understood to mean… pretty much whatever any individual wants it to mean. (In other words, “it is legal because I wish it.”) The technology, in this case Google’s search index, is a tool that the modern world cannot function without, but which the vast majority of users do not understand at all. These two worlds have now collided, with the most unfortunate result.
I’m going to touch on what the ECJ’s ruling means for marketers, but also for the web as a whole. The “right to be forgotten” is an impractical, and probably impossible, artifact of a pre-digital age that will hurt Europe as it transitions, along with the rest of the world, into what I call (in the masthead of this blog) our shared digital future. Here’s why.
It’s probably good to be clear about what actually happens before your page winds up on a Google search result.
- Step #1: A publisher posts a web page
- Step #2: Google’s crawlers find the content and index it
- Step #3: Google’s algorithms try to determine the meaning and value of that piece of content
- Step #4: Based on its historical understanding of search query intent and performance, Google tries to pair user queries with the most relevant content
- Step #5: Should a publisher wish to avoid being indexed by Google, they can easily insert a small noindex meta tag on a page-by-page or site-wide basis. Similar techniques are available for any other search engine.
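The opt-out in Step #5 is worth underlining, because it shows how lightweight the existing publisher-side control already is. As a rough illustration – a hypothetical sketch in Python’s standard library, not anything resembling Google’s actual crawler – here is how a crawler that honors the tag might check a page for it:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            # content is a comma-separated directive list, e.g. "noindex, nofollow"
            for token in attrs.get("content", "").split(","):
                self.directives.add(token.strip().lower())

def is_indexable(html: str) -> bool:
    """Return False if the page opts out of indexing via a noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(is_indexable(page))  # False: a crawler honoring the tag skips this page
```

The tag itself is one line of HTML; the burden on a publisher who wants out of the index is, in practice, trivial.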
I strongly suspect that we lost most regulators (in Europe, the U.S. and elsewhere) at Step #2. As a result, instead of recognizing Google as an organizer of information, the ECJ chose to classify it as a “data controller” rather than a “data processor,” thus triggering a much higher level of scrutiny. Google’s sheer popularity as the overwhelming market leader in search also, puzzlingly, contributed to this rationale.
The consequences, of course, are far-reaching – and not just for Google. I would bet you 100 Euros that there are crisis meetings happening right now at Facebook’s European office in Dublin about how the social giant should prepare for a similar legal challenge. Ditto for Twitter. Bing. Dropbox. Any photo-sharing service. Essentially, any service that allows individuals to share content about themselves and others has been put on notice that, in Europe, those individuals will now have the right to demand changes to or the removal of information for the most opaque and subjective of reasons. Today, European citizens can demand that Google remove links to information that is, in that person’s opinion:
- “inadequate, irrelevant or excessive in relation to the purposes of the processing”;
- “not kept up to date”;
- “kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes” (Paragraph 92 of the judgment)
The head swims. Who judges the “relevancy” of information about a person? And for how long? What kind of information, and how much, is “excessive?” How are we possibly to know what information is necessary to keep for historical or scientific purposes? And this doesn’t even begin to contemplate the potential applications of this reasoning to other sectors of the consumer web.
For marketers, the potential consequences of the “right to be forgotten” are extraordinary, especially when seen in light of the fast-approaching General Data Protection Regulation, which is set to replace the current Data Protection Directive. Obviously, all companies that collect non-anonymous information about European users should today be preparing for a requirement to scrub all information about individuals from their data warehouses upon request. But could those requests extend even further? How about to second-order uses of customer data – for example, a predictive model developed for serving ads to users based on prior behavior? Could an individual’s information somehow be unwound from such a model? If a new anonymous visitor were successfully targeted with such an ad, could s/he successfully complain? What kinds of customer information retention could someone find “excessive?” And most troublesome: Is it really all up to an individual’s whim?
What is relevant?
The democratization of information that the internet represents has been an almost unqualified boon to humanity. That said, the end result is usually a cacophony of views and information – some good, some bad. Today, this enormous and growing ocean of information is packaged and made available for simple, free public consumption by for-profit search engines, most famously Google. In this area, Google’s business interests happen to align with an essential public interest. Though this may not be the case with all of Google’s endeavors, a portal for finding relevant information across a nearly limitless number of sources that would be impossible to scan individually seems like an overwhelmingly positive contribution.
This question of relevancy is the key fulcrum of the question of individual privacy. Who determines what is “relevant”? Clearly, Google does not. Google’s algorithm determines relevancy not in a moral sense, but in a technical one: which content gives the users of a given query what they’re really looking for? If “what I’m really looking for” is objectionable to the subject of that information, the relevancy of that information has been decided by the searcher – not Google. (Because, you know, Google doesn’t actually control any of the content.)
Because the historical use of information was raised in the ECJ’s ruling, it’s interesting to run a thought experiment: what would our view of the historical record look like if individuals throughout history had had the ability to consciously craft the discoverable information around them? We would probably not have access today to the substantial evidence that Abraham Lincoln was gay, for example; or that Dr. Martin Luther King Jr. plagiarized large parts of his PhD dissertation; or that President Kennedy was a serial philanderer. The “right to be forgotten” would essentially wipe the record clean for all but a few extremely high-profile individuals throughout history, and reduce the public’s right to know to the lowest common denominator – the judgment of each individual.
The reality is that people will often disagree about what information is “relevant” based on their points of view. For instance, I find the details of the font on President Obama’s birth certificate utterly irrelevant, but there are many who apparently disagree. Any number of private citizens have details about their personal or professional lives that they might want stricken from the searchable record on questionable grounds. Indeed, many such citizens have begun to make exactly those requests.
Laws against libel and slander were developed long ago to protect individuals from deliberate, damaging lies in an era of information scarcity, and have traditionally been used as boundary markers for free speech in the West. The “right to be forgotten” now threatens to radically expand this category of out-of-bounds speech by proscribing discovery of that which is not just patently untrue and slanderous, but merely embarrassing or “irrelevant” (in the subject’s judgment). And it does so not by removing the speech itself (i.e. the published personal information), but by hiding it in the ever-increasing mass of electronic detritus by making it un-searchable. This does not truly make the information “forgotten,” of course – it only substantially increases the level of effort required to find it. Perhaps the ECJ will next dictate a requisite number of hours a researcher must devote to primary source discovery before it is legally permissible.
Tilting at windmills
Confronted with a technology that I doubt many on the Court deeply understood, the ECJ has, I fear, essentially allowed itself to be caught up in the current ill-defined moral panic sweeping the internet about “privacy” – whatever we mean by that word today. The day has gone to those who believe that we can turn back the wheel and install a regulatory governor on the rate of cultural change driven by technological advancement.
Of course, to those of us who understand that all modern lives are now inextricably connected to the online world, the notion that you could simply erase information from the record seems absurd. Data does not disappear anymore. Embarrassing information about a European citizen will still show up on a search result in the freest-speech regime available worldwide.
In an affected jurisdiction, Google could replicate its approach to searches for copyrighted material, and simply add a disclaimer at the bottom of a results page about how many links to a person’s name were removed and why. (Would that be better? Worse? Does it depend on a person’s feelings?)
Moreover, a far more straightforward solution to removing embarrassing content would be simply to require content owners – that is, primary publishers of information – to take responsibility for the privacy consequences of what they publish. They could use a search engine meta tag to block indexing, or to flag certain pages as carrying potentially sensitive information. Throwing the entire burden of privacy compliance onto Google’s back is not only bizarre, but demonstrates a telling indifference to the technology at stake. I suspect this is not the last such indifference we will see, and collectively it will help keep Europe a distant second, if not third, to the United States and Asia in consumer web innovation until this jurisprudence is revisited.
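In markup terms, that publisher-side approach could look something like the fragment below. The first tag is the real, widely honored robots directive; the second is a purely hypothetical “sensitive” flag of the kind imagined above – no search engine actually supports such a tag today:

```html
<head>
  <!-- Real, widely honored directive: ask search engines not to index this page -->
  <meta name="robots" content="noindex">
  <!-- Hypothetical flag of the kind proposed above; not an actual standard -->
  <meta name="privacy" content="sensitive">
</head>
```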
Let’s be clear: in the future, everyone will have an online identity, whether you want one or not. Some information will be good, and some probably bad. But there will be no individual choice in the matter – the digitization of our lives will be a fact wrought by irreversible economic and technological trends that transcend national or legal borders. Just as my great-grandfather Reeves reputedly disliked the advent of the automobile, and my grandmother was very wary of the consequences of sudden desegregation in the American South, and my Baby Boomer father is deeply skeptical of globalization today, disapproval of change will not stop it from coming. In the same way as industrialization, desegregation and globalization, inevitable changes to how we mediate our lives through the internet and information technology will fundamentally transform our understanding of what “privacy” means.
I always support citizens proactively determining the values they choose to model their society after – which is precisely what the European privacy activist community sees this ruling as doing. Unfortunately, instead of crafting a forward-looking vision for its digital citizenry, the ECJ seems rather to have taken a bold leap into the internet of 1998. We will see how this issue plays out, but I predict that this is the beginning, not the end, of a difficult puzzle of how traditional European concepts of privacy will reconcile – or not – with a digitally connected world.
Edit: It was brought to my attention that Loi is actually feminine, not masculine. I’ve corrected the error. Je m’excuse.