La Loi, c’est moi – Europe and the right to be forgotten

In 1655, King Louis XIV of France is said to have told the French parliament, “L’Etat, c’est moi.” (“I am the state.”) An alternative English rendering of what the Sun King really meant, however, is something more like “It is legal because I wish it.” Europe’s longest-serving major monarch believed in the divine right of kings, and consequently in his right – perhaps duty – to be the final, and sometimes only, arbiter of what was good for France. If Le Roi said it, then it was in the public interest, and it was so.

Today, the pendulum has swung to the opposite extreme. The radical democratization of information offered by the internet now challenges the very concept of regulatory control over what content is seen and what is not. The European Court of Justice (ECJ) has recently pushed this process to its limit, granting each of 500+ million European individuals wide-ranging control over what information about them can be found online. In doing so, the Court’s newly declared “right to be forgotten” has put much of the European public’s right to knowledge at the mercy of individual whim.


The “right to be forgotten” is the worst kind of law visited upon the most vulnerable kind of technology. The ruling itself is a knee-jerk reaction to a public moral panic over what is euphemistically called “privacy,” but is understood to mean… pretty much whatever any individual wants it to mean. (In other words, “it is legal because I wish it.”) The technology, in this case Google’s search index, is a tool that the modern world cannot function without, but which the vast majority of users do not understand at all. These two worlds have now collided, with the most unfortunate results.

I’m going to touch on what the ECJ’s ruling means for marketers, but also for the web as a whole. The “right to be forgotten” is an impractical, and probably impossible, artifact of a pre-digital age that will hurt Europe as it transitions, along with the rest of the world, into what I call (in the masthead of this blog) our shared digital future. Here’s why.

It’s probably good to be clear about what actually happens before your page winds up on a Google search result.

    • Step #1: A publisher posts a web page
    • Step #2: Google’s crawlers find the content and index it
    • Step #3: Google’s algorithms try to determine the meaning and value of that piece of content
    • Step #4: Based on Google’s historical understanding of search query intent and performance, they try to pair user queries with the most relevant content
    • Step #5: Should a publisher wish to avoid being indexed by Google, they can easily insert a small noindex meta tag on a page-by-page or site-wide basis. Similar techniques are available for any other search engine.
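To make Step #5 concrete, here is a minimal sketch of the standard opt-out mechanism (the comments are illustrative; `googlebot` is Google’s documented crawler token):

```html
<!-- Page-by-page opt-out: placed in the <head> of any page the
     publisher does not want to appear in search results. -->
<meta name="robots" content="noindex">

<!-- The same directive aimed at one specific crawler, e.g. Google's: -->
<meta name="googlebot" content="noindex">
```

For a site-wide opt-out, the same `noindex` directive can be sent as an `X-Robots-Tag` HTTP response header, and a `robots.txt` file at the site root can keep crawlers away from whole sections of a site entirely.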

I strongly suspect that we lost most regulators (in Europe, the U.S. and elsewhere) at Step #2. As a result, instead of recognizing it as an organizer of information, the ECJ chose to classify Google as a “data controller” rather than a “data processor,” thus triggering a much higher level of scrutiny. Puzzlingly, Google’s sheer popularity as the overwhelming market leader in search also contributed to this rationale.

The consequences, of course, are far-reaching – and not just for Google. I would bet you 100 Euros that there are crisis meetings happening right now at Facebook’s European office in Dublin about how the social giant should prepare for a similar legal challenge. Ditto for Twitter. Bing. Dropbox. Any photo-sharing service. Essentially, any service that allows individuals to share content about themselves and others has been put on notice that, in Europe, those individuals will now have the right to demand changes to or the removal of information for the most opaque and subjective of reasons. Today, European citizens can demand that Google remove links to information that is, in that person’s opinion:

    • “inadequate, irrelevant or excessive in relation to the purposes of the processing”;
    • “not kept up to date”; or
    • “kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes” (Paragraph 92 of the judgment).

The head swims. Who judges the “relevancy” of information about a person? And for how long? What kind of information, and how much, is “excessive?” How are we possibly to know what information is necessary to keep for historical or scientific purposes? And this doesn’t even begin to contemplate the potential applications of this reasoning to other sectors of the consumer web.

For marketers, the potential consequences of the “right to be forgotten” are extraordinary, especially in light of the fast-approaching General Data Protection Regulation. Obviously, all companies that collect non-anonymous information about European users should today be preparing for a requirement to scrub all information about individuals from their data warehouses upon request. But could those requests extend even further? How about to second-order uses of customer data – for example, a predictive model developed for serving ads to users based on prior behavior? Could an individual’s information somehow be unwound from such a model? If a new anonymous visitor were successfully targeted with such an ad, could s/he successfully complain? What kinds of customer information retention could someone find “excessive”? And most troublesome: is it really all up to an individual’s whim?


What is relevant?

The democratization of information that the internet represents has been an almost unanimous boon to humanity. That said, the end result is usually a noisy cacophony of views and information – some good, some bad. Today, this enormous and growing ocean of information is packaged and made available for simple, free public consumption by for-profit search engines, most famously Google. In this area, Google’s business interests also happen to align with an essential public interest. Though this may not be the case with all of Google’s endeavors, a portal to finding relevant information across a nearly limitless number of sources that would be impossible to individually scan seems like an overwhelmingly positive contribution.

This question of relevancy is a key fulcrum of the debate over individual privacy. Who determines what is “relevant?” Clearly, Google does not. Google’s algorithms determine relevancy not in a moral sense but in a technical one: which content best answers what the user of a given query is really looking for? If “what I’m really looking for” is objectionable to the subject of that information, then the relevancy of that information is being decided by the searcher – not Google. (Because, you know, Google doesn’t actually control any of the content.)

Because historical uses of information were brought up in the ECJ’s ruling, it’s interesting to run a thought experiment: what would our view of the historical record look like if individuals throughout history had had the ability to consciously craft the discoverable information about themselves? We would probably not have access today to the substantial evidence that Abraham Lincoln was gay, for example; or that Dr. Martin Luther King Jr. plagiarized large parts of his PhD dissertation; or that President Kennedy was a serial philanderer. The “right to be forgotten” would essentially wipe clean the record for all but a few extremely high-profile individuals throughout history, and reduce the public’s right to know to the lowest common denominator – the judgment of each individual.

The reality is that people will often disagree about what information is “relevant” based on their points of view. For instance, I find the details of the font on President Obama’s birth certificate utterly irrelevant, but there are many who apparently disagree. Any number of private citizens have details about their personal or professional lives that they might want stricken from the searchable record on questionable grounds. Indeed, many such citizens have begun to make exactly those requests.

Laws against libel and slander were developed long ago to protect individuals from deliberate, damaging lies in an era of information scarcity, and have traditionally served as boundary markers for free speech in the West. The “right to be forgotten” now threatens to radically expand this category of out-of-bounds speech by proscribing the discovery not just of what is patently untrue and slanderous, but of what is merely embarrassing or “irrelevant” (in the subject’s judgment). And it does so not by removing the speech itself (i.e. the published personal information), but by hiding it in the ever-growing mass of electronic detritus, making it un-searchable. This does not truly make the information “forgotten,” of course – it only substantially increases the level of effort required to find it. Perhaps the ECJ will next dictate a requisite number of hours a researcher must devote to primary source discovery before it is legally permissible.

Tilting at windmills


After confronting a technology that I doubt many on the Court deeply understood, I fear that the ECJ has essentially allowed itself to be caught up in the current ill-defined moral panic sweeping the internet about “privacy” – whatever we mean by that word today. The day has gone to those who believe that we can turn back the wheel and install a regulatory governor on the rate of cultural change driven by technological advancement.

Of course, to those of us who understand that all modern lives are now inextricably connected to the online world, the notion that you could simply erase information from the record seems absurd. Data does not disappear anymore. Embarrassing information about a European citizen will still show up on a search result in the freest-speech regime available worldwide.


A better approach?

In an affected jurisdiction, Google could replicate its approach to searches for copyrighted material, and simply add a disclaimer at the bottom of a results page about how many links to a person’s name were removed and why. (Would that be better? Worse? Does it depend on a person’s feelings?)


Moreover, a far more straightforward solution for removing embarrassing content would be to simply require content owners – that is, primary publishers of information – to be responsible for the privacy consequences of what they publish. They could use a search engine meta tag to block indexing, or to flag certain pages as carrying potentially sensitive information. Throwing the entire burden of privacy compliance onto Google’s back is not only bizarre, but demonstrates a telling indifference to the technology at stake. I suspect that this is not the last of such indifference, which will collectively help keep Europe a distant second, if not third, to the United States and Asia in the realm of consumer web innovation until this jurisprudence is revisited.
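To separate what exists from what I am proposing here: the `noindex` directive is real and long-supported, while a machine-readable “sensitive information” flag is purely hypothetical – I am inventing the `privacy-sensitive` name to sketch the idea, and no search engine recognizes it today:

```html
<!-- Real and supported today: keep this page out of search indexes. -->
<meta name="robots" content="noindex">

<!-- Hypothetical (my invented name, not any standard): a publisher-set
     flag telling search engines that this page contains personal
     information, e.g. so it could be omitted from name-based queries. -->
<meta name="privacy-sensitive" content="personal-data">
```

A scheme like this would put the privacy judgment where the ruling should have put it – with the publisher who actually controls the content.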


Let’s be clear: in the future, everyone will have an online identity, whether they want one or not. Some information will be good, and some probably bad. But there will be no individual choice in the matter – the digitization of our lives will be a fact wrought by irreversible economic and technological trends that transcend national or legal borders. Just as my great-grandfather Reeves reputedly disliked the advancement of cars, my grandmother was wary of the consequences of sudden desegregation in the American South, and my Baby Boomer father is deeply skeptical of globalization today, disapproval of change will not stop it from coming. Just like industrialization, desegregation and globalization, inevitable changes to how we mediate our lives through the internet and information technology will fundamentally transform our understanding of what “privacy” means.

I always support citizens proactively determining the values they choose to model their society after – which is precisely what the European privacy activist community sees this ruling as doing. Unfortunately, instead of crafting a forward-looking vision for its digital citizenry, the ECJ seems rather to have taken a bold leap into the internet of 1998. We will see how this issue plays out, but I predict that this is the beginning, not the end, of a difficult puzzle of how traditional European concepts of privacy will reconcile – or not – with a digitally connected world.


Edit: It was brought to my attention that Loi is actually feminine, not masculine. I’ve corrected the error. Je m’excuse.



  • Aurelie Pols

    La loi, c’est le respect des autres, surtout dans un monde globalisé! (“The law is respect for others, especially in a globalized world!”)

    I beg to disagree on so many points, Blair, but we do agree on one thing: there’s a lack of understanding of technology by the legal profession, including those who influence law-making at a country or even global level, like the OECD.
    A client asked me last week if it was a lack of knowledge. I’d prefer to say it’s a lack of communication. Let’s reach out.

    So, just to make things clear. We Europeans do not hold Freedom of Expression in such high esteem as the US. The reason for that, as I was reminded yesterday following the recent EU elections (and mainly France’s results), is that in 1933, Hitler got elected with 33% of the votes.
    This is also not new: back in 2006, Yahoo was asked to take down some content. Same exact discussions and US outcries about Freedom of Expression.

    The most interesting part, imho, is the argument that this is going to be hell to police. Are you serious? We are talking about the guys who want to organize the world’s information, right? The ones that just launched a self-driving car?
    Venga, I’m sure even Google has the brainpower to find a solution to this.

    Last but not least, let’s all be very careful – most of all Google. While I wrote about their monopolistic behavior back in 2012, it looks like Spiegel in Germany is echoing those fears.

    Again, surely all those smart people at Google can figure out how to sift through content or have all those bright PhDs left?

    • Blair Reeves

      The biggest problems that I see with this ruling go beyond whether or not you think this kind of information policing is good public policy. (Perhaps you do, and I don’t, but that’s a judgment call.) More fundamental is: why on earth does this make Google a “data controller?” Why does it not make eminently more sense to ask content publishers to be responsible for the privacy implications of the information they publish? Unless, that is, their right to free expression outweighs individual rights to privacy. (!!)

      And that’s the crux here – there is inevitably a balancing of interests when you apply “fundamental rights” to reality. We do it with the Bill of Rights all the time. You can’t yell “fire” in a crowded theater, etc. So I think that the EU right to privacy could be accommodated in a much better way.

      But more fundamentally, this ruling does nothing but create bureaucratic hassle. Either certain information is public, or it isn’t. Criminal histories, bankruptcies, credit reports, addresses, photos, news reports – it’s not as if making this stuff less searchable via Google makes it disappear from the internet. That’s preposterous. All this ruling will do is increase the effort level required to find it. (In so doing, it also dramatically raises the costs of doing business in Europe, which advantages giants like Google and MSFT at the expense of smaller players.) So unless the ECJ plans to actually change what information is public record and what is not, I think the underlying reasoning in this ruling is deeply flawed.

  • Frank Lee

    I think a better question, Aurelie, is would you rather have those bright PhDs creating world-changing products like self-driving cars and contact lenses for diabetics and so on, or coming up with effective ways to deal with dozens of separate European bureaucratic morasses of red tape for a result that is, as Blair so eloquently explained, a mix of unpredictable and unhelpful?

  • Richard Beaumont


    The holes in your arguments are plain. You correctly note that Google determines the value of a piece of content – that value determines whether and where it appears in the index.

    How it determines that value is a secret – its algorithms are intellectual property, after all. By organising information in this way, Google is most definitely a data controller – its decisions influence the search results, and therefore it must take responsibility for them.

    You say that we will all have digital identities – of course we will. But do we not have a right to exert some control over those identities, as we do in the real world?

    Perhaps you have fallen victim to the headlines? Only the media is calling this a Right to be Forgotten – shorthand for a much more complex issue, and far from the reality. The ECJ knows this. The reality is that this is a right to obscurity – rather like having frosted glass on a bathroom window – and how could you begrudge that?

    You also point out that publishers can use tags to stop content being indexed. Yes they can – but that is a blunt tool that stops a whole page from being indexed, which is a much greater reduction in content availability than simply removing the link between a name and the content.

    However, I do think there is a role for original publishers to play here – to tag up personal data in pages, maybe even put expiry dates in them. But who would have the power to push that onto publishers, get them to take up such a system quickly? The ECJ? Of course not.
    Google has just such market power. It can persuade publishers to change by refining its indexing algorithms to promote those that take on such responsibility, and demote those that don’t.

    This is a complex problem – of course it is. However, Google has grown to the size it is by proving itself to be very good at solving complex problems. I have no doubt they will solve this one too.

    • Blair Reeves

      I disagree that organizing information makes Google a data controller, rather than… say, an organizer. Publishers control their content, not the manufacturers of the card catalog.

      The meta tag solution is actually far more flexible than you imply, and could be made more so. It is probably a better way for actual data controllers to be held accountable for the privacy implications of what they publish.

      Is there any doubt Google can do something to solve this problem? No. But it’s unfair and impractical to ask them to do so, it makes for bad public policy, and dramatically raises the costs of doing business in Europe (once again).

  • Sofia Koutsouveli (Lucifairy)

    What the ECJ did was censorship. Individuals don’t have privacy rights in public: a photographer can take your picture and publish or sell it even if you don’t want them to, because the photographer’s freedom of speech outweighs your wish to avoid being photographed. In the same way, facts that have been published about you shouldn’t be under your control. Once information has become public knowledge, it’s the public’s right to know it, and attempting to control its dissemination on privacy grounds is censorship. Freedom of expression is superior to individuals’ privacy even if the speech is injurious, defaming or otherwise damaging to the individual. As a blogger, I can visit your public Facebook profile any time I want and write about your public posts on my blog, with screenshots of embarrassing pictures you’ve posted publicly.

So, what do you think?