Do Privacy Violations and Data Breaches Cause Harm?

In two earlier posts, I’ve been exploring the nature of privacy and data security harms.

Post 1: Privacy and Data Security Violations: What’s The Harm?

Post 2: Why the Law Often Doesn’t Recognize Privacy and Data Security Harms

In this post, I want to explore two issues that frequently emerge in privacy and data security cases: (a) the future risk of harm; and (b) individual vs. social harm.

Future Risk of Harm

As I discussed in my first post in this series, the law conceives of harm in terms of visceral and vested injuries: financial or physical harm that has already occurred. Courts struggle greatly in handling the future risk of harm.

Is a future risk of harm really a harm? I believe that it is. It might be hard to see, but consider the following analogy: We generally don’t perceive air as having mass or weight, but of course it does. Experiments to prove this to schoolchildren typically involve balancing two inflated balloons and then popping one; the balance tips toward the intact balloon, showing that the air inside has weight.

Now let’s look at the harm from a data breach. There may be no visible identity theft or fraud, but let’s try a comparison similar to the balloon experiment. Imagine I own two identical safes. I want to sell them, so I list them on eBay:

  1. SAFE FOR SALE
    Made of the thickest iron with the most unbreakable lock.

  2. SAFE FOR SALE
    Made of the thickest iron with the most unbreakable lock. However, the combination to the safe was improperly disclosed and others may know it. Unfortunately, the safe’s combination cannot be reset.

Which safe would get the higher price?

Now we can see it! Safe 2 is no longer as good as Safe 1. It has been harmed by the improper disclosure, and its value has been reduced.

If I remove the locks from the doors of your house, but no burglar or intruder has yet appeared, is there no harm to you? I think there is: you’re clearly worse off.

Or suppose there’s a new virus. The virus isn’t contagious, and it has no side effects. But it makes people more vulnerable to getting a painful disease later on, one that can take a year or more to recover from. Many people with the virus will never get the disease; only some will. But everyone who carries it is at greater risk. Now, imagine I secretly inject you with this virus. Are you harmed?

Now, suppose there’s a remedy – another shot that cures the virus. Would you pay for it?

I offer these analogies to demonstrate that although an increased risk of future harm may be hard to see with the naked eye, it puts a person in a worse position. People are made more vulnerable; they are put in a weakened and more precarious position. Their level of risk is increased. In the immediate present, this situation is undesirable, anxiety-producing, and frustrating.
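To make that intuition a bit more concrete, here is a minimal sketch of the expected-loss arithmetic implicit in the safe example. Every probability and dollar figure below is a hypothetical number chosen purely for illustration, not data about any actual breach.

    # Hypothetical expected-loss comparison for the two safes.
    # All numbers here are assumptions chosen for illustration only.
    value_of_contents = 10_000   # assumed value of what the safe protects

    p_break_in_safe_1 = 0.001    # assumed chance an intact combination is defeated
    p_break_in_safe_2 = 0.05     # assumed chance the leaked combination is used

    expected_loss_1 = p_break_in_safe_1 * value_of_contents   # $10
    expected_loss_2 = p_break_in_safe_2 * value_of_contents   # $500

    # A rational buyer would discount Safe 2 by roughly the difference,
    # even though no theft has occurred yet.
    print(f"Expected loss, Safe 1: ${expected_loss_1:,.2f}")
    print(f"Expected loss, Safe 2: ${expected_loss_2:,.2f}")
    print(f"Reduction in present value: ${expected_loss_2 - expected_loss_1:,.2f}")

The only point of the sketch is that a higher probability of a future loss translates into a lower value today, before any theft or fraud ever materializes.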

And how can there be no harm when so many laws mandate the protection of privacy and data security? If violations don’t create harms, then why have all these laws? Why mandate costly compliance measures? In short, if data violations don’t cause harm, then why spend so much money and time in protecting against them?

Individual vs. Social Harm

The law is often fixated on individual harm, but many privacy and data security issues involve not just harm to individuals but also a larger social harm.

What if a company secretly hands your data over to the NSA, and you never find out about it? Nothing bad ever happens to you. The data just goes into some supercomputer at the NSA, where it is stored secretly forever. Are you harmed? Or is it akin to the proverbial tree that falls in the forest with no one around to hear it?

The fact that the NSA can gather data in secret, virtually unchecked and without accountability to the public, is a threat to democracy and certainly a problem. It is harmful to society as a whole, but it might be hard to prove that any one individual was harmed.

Is Harm the Right Issue?

So what should be done? In this series of posts, I have shown how the law often fails to recognize privacy and data security harms and why it is so difficult for the law to do so. In this post, I have shown that privacy and security violations really do cause problems that are harmful, just in ways that are very difficult to establish within the law’s current framework.

One way to deal with the problem is to push the law to better recognize privacy and data security harms. I think this could help, though it will be quite challenging. And even if the push succeeds, I am unsure whether a recognition of harm would best solve the problems. Class action lawyers would surely benefit, but would it achieve the goals we are after? For me, those goals broadly are (1) a robust use of data; (2) robust protections on that data; (3) widespread compliance with these protections and strong deterrence for violations; and (4) redress when individuals are harmed in a significant manner.

Perhaps the best approach is to shift the focus away from harms. But if we do that, what should the focus be on? How should privacy and security violations be dealt with? I will explore this issue in the next installment.

In the meantime, if you haven’t read them already, please check out the first two pieces in this series:

1. Privacy and Data Security Violations: What’s The Harm?

2. Why the Law Often Doesn’t Recognize Privacy and Data Security Harms

+++++

The author thanks SafeGov for its support.
