The Failure of the GDPR

Trust, it is said, is hard to gain and easy to lose. But does trust matter to consumers?

The EU’s General Data Protection Regulation (GDPR) came into effect on 25 May 2018. Its aim was to “respect [people’s] fundamental rights and freedoms, in particular their right to the protection of personal data.” Has it achieved that?

According to surveys, only 21% of people trust brands to safeguard their private data, and 40% do not trust companies with it at all. Of course, fewer than 9% of consumers read the terms and conditions, so most people have no clue what they are agreeing to when they sign up.

Dark patterns trick people into accepting whatever tracking companies want. And even if people do read the “We care about your privacy” cookie banners and make deliberate choices, more than half of such consent pop-ups ignore those choices anyway.

The GDPR’s primary achievement thus far appears to be €4b in fines from roughly 2,000 GDPR violations; the same source shows that the number of fines per month is trending slightly upward.

Beyond that, the personal data of the average European is shared 376 times each day for online advertising. For users in the US, that figure is roughly twice as high. In that sense, EU citizens’ data is somewhat safer than that of Americans, who have no federal data protection regulation, only laws in a few states.

Penalties

Let’s look at a selection of salient data breaches and violations of data protection and antitrust regulations by tech companies and data brokers. Perhaps the number and extent of such violations hold an answer to the question of whether trust matters to consumers.

Meta

Meta and TikTok are among the tech companies with the worst reputations in the US. Meta and its subsidiaries currently hold six of the ten largest GDPR fines, with a grand total of €2.5b. Not among these is the FTC’s $5b penalty for the Cambridge Analytica scandal.

There have also been countless data breaches, including one that exposed unencrypted passwords, and apps, among them those of mental health charities, that shared sensitive personal information with Meta for ad targeting.

TikTok

Among the younger generations, Facebook & Co. are not as popular as TikTok and YouTube. Those platforms’ track records are not much better, though.

Apart from various data breaches, TikTok reached a $92m settlement for harvesting data without user consent. It has also illegally processed children’s data, lacked age verification tools, spied on reporters, and contributed to teenage suicides.

Google

YouTube promoted child exploitation and reached a $170m settlement over collecting children’s data without parental consent.

YouTube’s parent company, Alphabet, holds three of the ten largest GDPR fines, for a total of only €200m. Alphabet has had its share of data breaches, too. Its largest fine from the EU was a €4.34b antitrust penalty for using Android to entrench its search engine dominance, not unlike Microsoft’s bundling of Internet Explorer in the 1990s.

Speaking of Microsoft…

Microsoft

There was Microsoft’s recent Azure snafu that exposed data from US and European government agencies due to mishandled account keys. Personal data from half a billion LinkedIn users was scraped and sold, and webmail accounts have been breached several times over. The FTC invoiced Microsoft $20m for collecting children’s data without parental consent.

Of course, antitrust lawsuits have always haunted Microsoft. The European Commission fined the company $732m for breaking its promise to offer EU-based Windows users a choice of web browsers, after it had already fined Microsoft $611m for abusing its monopoly position. The US Department of Commerce also collected $3.3m for violations of export controls and sanctions. Mere pocket change.

Amazon

Amazon’s lacklustre reputation is well earned. It was fined $1.28b for abuse of its market position, $877m for GDPR violations, $25m over hoarding children’s data through Alexa, and $5.8m over Ring spying on users.

Surveillance appears to be Amazon’s favourite pastime. The company has also spied on labour and environmental groups, its delivery drivers, and its warehouse workers.

Uber

Uber has been known for its lawlessness and all-round questionable conduct, as well as a culture of sexual harassment and abuse over which its CEO had to resign. Its CISO was convicted of covering up a data breach that also cost the company $148m in penalties.

Revolut

Among a series of controversies, Revolut’s most egregious was probably the case of an executive threatening a customer. That’s the same company that let hackers steal $20m thanks to lax security practices.

Dishonourable Mention

Amazon is not alone when it comes to spying on people. Tesla filmed people in their cars. Snapchat employees had access to private snaps. Qualcomm chips share private information without explicit user consent. And iRobot is sitting on a blackmail goldmine.

A few more fines and settlements for data breaches and privacy violations include Equifax ($575m), Epic Games ($520m), T-Mobile ($350m), British Airways ($230m), Home Depot ($200m), Capital One ($190m), Twitter ($150m), Marriott International ($124m), Morgan Stanley ($120m), and Yahoo ($85m).

Data Brokers

Among the companies listed is Equifax, a credit rating agency that in effect operates as one of many data brokers, alongside Acxiom, Epsilon Data Management, Oracle, Experian, and CoreLogic. These companies stockpile thousands of data points on billions of people.

Since they process any personal data they can lay their hands on, with the intent of selling it on to whoever is willing to pay, for whichever purpose, it is highly dubious:

  • how the requirement for transparency on the purposes of data collection can be met prior to consent;
  • how the purpose of processing personal data can be explicit or guaranteed to be legitimate prior to user consent;
  • how users can even give their informed consent; or
  • how users can exercise their right to erasure when they do not even know who holds their personal data.

Unfortunately, the courts have so far sided with the data brokers. A $10b class-action lawsuit against Oracle and Salesforce for processing personal data without explicit consent, for instance, was later declared inadmissible. If consumers have little recourse when their data is sold behind their backs, the GDPR has truly failed in its objective to safeguard people’s personal data, favouring commerce instead.

The Cost of Doing Business

Large tech companies and data brokers can definitely afford to build the infrastructure needed to process personal data in accordance with applicable laws. So, do large companies treat penalties for data breaches and privacy violations simply as the cost of doing business? Or do consumers just not care about trust?

Maybe the problem is that consumers have no alternatives due to near monopolies (e.g. in automotive or food and beverages): they vote with their wallets, but the money simply goes into another pocket of the same pair of suit trousers. With the largest social networks, network effects can definitely lock in users.

After so many repeated breaches, perhaps users simply expect to be harvested for their personal data in return for free services. That flies in the face of surveys indicating that trust matters more to younger people, even though their favourite apps have less-than-stellar track records. Of course, the difference between what people say they might do and what they actually do renders most market surveys useless, as any product manager knows.

So, is trust essential to consumers? I honestly don’t know, though I doubt it. Meta, with three billion monthly active users, and TikTok, with one billion, prove that reputation is not correlated with market success. What remains elusive is a reduction in the rate of data protection violations. That can mean either that enforcement is improving, although the same companies pop up time and again, or that companies barely care about data protection, insofar as that can be inferred from their actions (or lack thereof) rather than their words. And if users do not really care, why should companies?

Up Next: AI Act

The EU is about to unleash its AI Act, with which it aims to protect people from foreseeable harm. While that is a noble pursuit, it appears to reuse the GDPR playbook. The GDPR may have fostered a European data protection culture, raised awareness of data privacy, and inspired similar regulations around the globe, but more than five years on, it has yet to produce clear and measurable improvements in the protection of personal data.

Thanks to extensive lobbying, the EU even caved on its stance to prohibit the transfer of data from the EU to the US. Under the revised arrangement, the US monitors its own compliance. Such self-regulation is obviously a farce.

It is not merely a matter of enforcement but of certain companies consistently flouting the rules and preferring to pay the penalties rather than improve data protection. Partly because it’s cheaper. And partly because the regulation is so verbose yet says so little on technical matters that much is left as an exercise for the reader.

The AI Act is no exception. Its risk-based approach to AI is sensible, though inconsistent, especially when it comes to foundation models. AI for military use is explicitly excluded from the regulation, which undermines the notion that the regulation is designed to protect “health, safety, and fundamental rights.”

Many problems, as in the case of the GDPR, come down to vagueness in the core definitions. For instance, an AI system encompasses any machine-based system that “generates outputs […] that influence physical or virtual environments.” That may very well include rules-based systems or simple heuristics, which could turn almost any piece of software with a single control flow statement into an AI system in the eyes of the law. For foundation models, what exactly “broad data at scale”, “generality of output”, or a “wide range of distinctive tasks” mean is never made clear.
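To see how broad that definition is, consider a deliberately trivial, hypothetical sketch: a rules-based thermostat controller in a few lines of Python. It contains a single control flow statement, no learning and no statistics, yet under a literal reading it “generates outputs that influence physical environments”:

```python
# A deliberately trivial rules-based controller, as a hypothetical
# illustration: under a literal reading of the AI Act's definition,
# it "generates outputs that influence physical environments",
# even though no practitioner would call it AI.

def thermostat(temperature_c: float, setpoint_c: float = 21.0) -> str:
    """Return a heating command based on a single if-statement."""
    if temperature_c < setpoint_c:
        return "HEAT_ON"   # an output that influences a physical environment
    return "HEAT_OFF"

if __name__ == "__main__":
    print(thermostat(18.5))  # prints: HEAT_ON
```

Whether such a system would actually fall in scope is for lawyers to argue, which is precisely the problem: the text, not the technology, decides.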

Unless the AI Act becomes more specific in crucial technical details, it may achieve the opposite of its intentions, just as the GDPR came with plenty of entirely foreseeable yet unintended consequences, such as compliance costs becoming a barrier to entry for startups.

That said, the GDPR has been a roaring success for politicians and only a minor nuisance to companies with pockets deep enough to pay off the fines. As it stands, I expect the AI Act to be no different. Sadly.