How Should the Law Handle Privacy and Data Security Harms?

In three earlier posts, I’ve been exploring the nature of privacy and data security harms.

In the first post, Privacy and Data Security Violations: What’s The Harm?, I explored how the law often fails to recognize harm for privacy violations and data breaches.

In the second post, Why the Law Often Doesn’t Recognize Privacy and Data Security Harms, I examined why the law has struggled to recognize harm for privacy violations and data breaches.

In particular, I pointed out the “collective harm problem” — that data harms are often caused by the combination of many actions by different actors over a long period of time, which makes it hard to pin the harm to a single wrongdoer.

I also discussed the “multiplier problem” – that companies hold data on so many people these days that a single incident can affect millions of people yet cause each one only a small amount of harm. Added up across everyone affected, however, the total could be catastrophic for a company.

In the third post, Do Privacy Violations and Data Breaches Cause Harm?, I examined why the future risk of harm, often ignored by courts, really is harmful. I also pointed out that privacy violations and data breaches often cause harm not just to individuals, but also to society.

In this post, I will discuss how the law should handle privacy and security harms.

Statutory Damages

One potential solution is for the law to have statutory damages – a set minimum amount of damages for privacy/security violations. A few privacy statutes have them, such as the Electronic Communications Privacy Act (ECPA).

The nice thing about statutory damages provisions is that they obviate the need to prove harm. Victims can still try to prove harm above the fixed amount, but if they can’t, they can recover the fixed amount anyway.

Are statutory damages the answer? Yes and no. In cases where we want the law to recognize harm and where harm is very difficult to prove, statutory damages do the trick. But there are many circumstances, as I discuss below, where I’m not sure we would be better off if the law compensated for harm.

Should the Law Start Compensating for Data Harms?

One answer is to push the law to start compensating for data harms. On the pro side, I believe that there really are harms caused by privacy violations and data breaches. But would things be better if the law always compensated for harm? Not necessarily. There are at least two reasons why not.

Our Clunky and Costly Legal System

In many cases, the harm to each individual might be small. It would not be worth that person’s time to sue. Nor would it be worth the time and expense to have the legal system involved in millions of cases involving small harms.

There is a way our legal system gets around these difficulties – class actions. But class actions also have their pathologies. The members of the class in data harm cases hardly get anything; the lawyers make out like bandits.

Class actions do serve an important function, though. They act as a kind of private enforcement mechanism, and the damages they produce can function as the equivalent of a fine that deters violations. But many cases settle simply because the cost of litigating them is too high. In an ideal system, cases would settle based on their merits, not on the punishing expense of the legal system.

The Multiplier Problem and the Collective Action Problem

Recognizing harm would not, by itself, address the multiplier problem.

When an organization causes a small amount of harm to many people, do we want to devastate that company with damages?

Causing a few people a lot of harm is generally worse than causing a lot of people a little harm. Society will frown more on stabbing one person to death with a sword than on poking 100 people in the arm with an acupuncture pin.

SCENARIO 1: Suppose X Corp says to you: “We have this really cool service, but there is a risk that we will cause $1 of harm to you. Do you want the service?” You say: “Sure, I’ll accept that risk because the service seems cool and the risk of harm is low.” One billion other people give the same answer. If X Corp has an incident and causes $1 of harm to each of those one billion people, we might not want X Corp to be bankrupted by $1 billion in damages.

SCENARIO 2: Now suppose X Corp comes to you and says: “There is a risk that we will cause you $10,000 worth of harm.” You say: “Hey, wait a moment, that’s quite a lot.” Even if you are the only person who might suffer the $10,000 harm, we generally have a problem throwing you under the bus for the collective good – even if X Corp’s service benefits everyone else.

But now imagine 10,000 X Corps each come to you with the deal in Scenario 1 – all at once. That’s a potential $10,000 in harm in the aggregate, which makes the whole deal look much less attractive – more like Scenario 2.
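To make the arithmetic behind these scenarios concrete, here is a minimal sketch in Python using only the hypothetical figures from the scenarios above ($1 of harm, one billion affected people, 10,000 companies). It simply contrasts the damages facing a single company with the aggregate exposure facing a single person.

```python
# Illustrative arithmetic for the two scenarios above, using the
# hypothetical figures from the text (not real data).

harm_per_person = 1                 # Scenario 1: $1 of harm per person
people_affected = 1_000_000_000     # one billion users of X Corp
companies = 10_000                  # number of X Corp-like companies one person deals with

# From the company's perspective: a $1 harm to each user adds up to
# damages that could bankrupt the firm.
total_damages_for_one_company = harm_per_person * people_affected
print(f"Damages facing one company: ${total_damages_for_one_company:,}")   # $1,000,000,000

# From one person's perspective: accepting the same small risk from
# 10,000 companies aggregates to the Scenario 2 level of exposure.
aggregate_exposure_per_person = harm_per_person * companies
print(f"Aggregate exposure per person: ${aggregate_exposure_per_person:,}")  # $10,000
```

The same tiny per-person harm thus looks trivial from one angle and ruinous from the other, which is exactly the tension the multiplier and collective action problems create.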

That’s the difficulty. So I don’t think the solution is as simple as the law just recognizing harm.

Moving Beyond Harm

Although privacy/security violations cause harm, the legal system should move beyond its fixation on harm. There are many circumstances where society is better off when people or entities comply with the law even if no harm occurs. Harm is still relevant, because laws are passed to address problems that can cause harm, but those laws are designed to deter the conduct regardless of whether it causes harm in any particular case.

For example, suppose you drive through a red light in the middle of the night with nobody else around. You get caught on a traffic camera and fined. There is no harm to others. Should the law be changed to fine you only if you caused harm? Imagine if that were the law. You would then run red lights whenever, in your own judgment, you thought there was no risk of causing harm. You might trust your own judgment here, but do you really trust everyone else’s?

The reason for enforcing the law here is to deter, and for that purpose harm really isn’t important in any one individual case. Running red lights causes harm in the aggregate, and that’s why the law forbids it. The law addresses harm by looking at the big picture – the collective run of cases – not each particular case.

Governmental Agency Enforcement

Maybe governmental agency enforcement is the answer. For example, the FTC has been bringing actions against companies for privacy incidents and data security violations under its authority to regulate “unfair or deceptive acts or practices.” The FTC has brought such cases for more than 15 years, and it has a broader view of harm: it is not tethered simply to monetary or physical harm. (For more background on FTC enforcement, see Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 Columbia Law Review 583 (2014).)

Agencies can get around the multiplier problem because they are not tethered to the traditional harm model, which requires a particular amount for each person affected. Agencies can impose an appropriate fine by taking all the circumstances into account (though the FTC, unfortunately, is limited in its ability to fine).

The FTC can also address data security issues earlier on, before they cause harm. In a few cases, the FTC has brought actions against companies for inadequate security even though the companies had not yet suffered a data breach.

However, we shouldn’t rely solely on agencies, given the problems of agency capture and the efforts of some presidential administrations to undermine agencies they don’t like. When agencies don’t stand up for people, people need a way to stand up for themselves, and one of the great virtues of our legal system is that it often gives individuals a means to seek redress on their own. We need a mechanism that does not leave individuals solely at the mercy of agencies for their protection.

Back to Basics: Focusing On Goals

The best way to approach the issue is to go back to basics and focus on our goals. I think the following goals will command wide consensus:

(1) We want a system that permits a robust use of personal data when it provides social benefits.

(2) We want robust protections for personal data.

(3) We want widespread compliance with these protections and strong deterrence of violations.

(4) We want compensation for individuals who are harmed in a significant manner.

I will focus on #3 and #4 below.

The Need for Compliance and Deterrence

The law needs to create an incentive to comply. Too often, we rely merely on good will and kindness to motivate compliance, but experience shows that this doesn’t work. Only by creating the right incentives will the law make companies behave appropriately.

The law should focus primarily on deterrence. The ideal penalty, in my view, is one that makes the company worse off for the violation. Too often, agency penalties – including those of the FTC and HHS – recoup only a fraction of what the company gained from the violation.

Moreover, there must be a reasonable likelihood of getting caught. The FTC and HHS don’t bring many actions, so many entities – especially smaller ones – are rarely targeted. Occasionally one is, but for most, being struck by lightning seems more likely.

Ultimately, penalties should be designed to create adequate incentives to deter violations.

We also need a mechanism to protect individuals when agencies fall into periods of lax enforcement or are weakened by a presidential administration antagonistic to the agency’s mission.

One possible solution is to allow people to sue only if a court determines that no regulatory agency has taken adequate action. The court would first review how the agency handled the matter; if the agency didn’t handle it adequately, the case could proceed in court.

Compensation

We would still need a compensation system for individuals who are harmed in a significant way. Perhaps this could be established through a fund drawn from the monetary penalties agencies exact from non-complying entities.

Or maybe we should require companies that collect data to pay into a general fund, administered by the government, that would compensate people (something like workers’ compensation). The payment would work like an insurance premium, set higher or lower based on whether a company followed industry standards, how much data it held, how sensitive that data was, and whether the company had suffered a breach in the past.
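As a purely illustrative sketch of how such a risk-adjusted premium might be computed, the Python snippet below uses invented factor names, base rates, and weights; none of them come from any actual proposal or statute.

```python
# A hypothetical sketch of a risk-adjusted premium for a data-compensation
# fund. All factors, weights, and rates here are invented for illustration.

def annual_premium(records_held: int,
                   holds_sensitive_data: bool,
                   meets_industry_standards: bool,
                   breaches_in_past_5_years: int) -> float:
    base_rate_per_record = 0.01          # hypothetical: one cent per record held
    premium = records_held * base_rate_per_record

    if holds_sensitive_data:
        premium *= 2.0                   # sensitive data doubles the premium
    if meets_industry_standards:
        premium *= 0.5                   # following standards earns a discount
    premium *= 1.0 + 0.25 * breaches_in_past_5_years  # surcharge per past breach

    return premium

# Example: 2 million sensitive records, standards followed, one past breach.
print(f"${annual_premium(2_000_000, True, True, 1):,.2f}")  # $25,000.00
```

The point is only that the premium can scale with the risk a company poses, much as insurance premiums do, rather than with harm proven after the fact.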

Conclusion

The above proposals are just half-baked ideas at this point. The important thing, though, is that we clearly identify our goals and recognize what we want the legal system to do. We must not lose sight of these goals in debates about harm. The goals are what will guide us and help us avoid all the confusion and problems caused by the struggle over conceptualizing data harm.

I hope that this series of posts is a helpful first step in the process of bringing more light than heat into the debate about privacy and data security harms.

Previous Posts In This Series

Post 1: Privacy and Data Security Violations: What’s The Harm?

Post 2: Why the Law Often Doesn’t Recognize Privacy and Data Security Harms

Post 3: Do Privacy Violations and Data Breaches Cause Harm?

+++++

The author thanks SafeGov for its support.
