Solove: Privacy regulation a failure
CAMBRIDGE, MASS. – The current U.S. approach to privacy regulation fails to account for the effects of information sharing created by the ascendance of technologies that permit things such as Big Data or fusion centers, said Daniel Solove, a noted privacy law researcher and a professor at George Washington University. He spoke Nov. 9 during a symposium on privacy and technology held by the Harvard Law Review.
The current model, which Solove dubbed the "privacy self-management approach," takes refuge in the notion of consent, he said.
"The basic concept is tell people that data is being collected, tell people what's going to happen with their data, how it's going to be used, how it might be disclosed, and let people decide whether or not they consent to those particular uses."
That model overlooks problems both cognitive and structural in nature, Solove said.
In the former category is the fact that few people read company privacy policies, a problem that isn't readily solvable, since the policies are complicated because privacy itself is complex. Another cognitive problem is that people make incorrect assumptions about how their privacy is being protected and struggle to make risk determinations about privacy in the first place.
"People assess familiar dangers as riskier than unfamiliar ones, and one of the problems with privacy is that the dangers are not as familiar." How questions of privacy risk are framed, as well, can significantly change how people assess risk, Solove added.
When it comes to structural issues, the self-management approach overlooks data aggregation. People may willingly disclose small bits of information about themselves that seem inconsequential at the time – which diet cola they prefer, for example – but when many pieces of data are assembled together, data analytic tools can make "judgments and predictions and reveal other facts about you that you didn't realize you had revealed," Solove said.
"If I'm deciding at a particular point in time about whether to give my data, I have no idea about how, sometime down the future, 5 years later, when I give the 100,000th piece of data, then suddenly – ding! – a new fact pops up because some other piece of data I gave 3 or 4 years ago is combined with other pieces of data."
Another structural problem is that of defining the harm suffered as a result of privacy violations. "They don't always result in severe emotional distress. A lot of the violations are small." The problem is that just as data points aggregate, so can privacy violations. Social values, too, can be challenged by a lack of privacy, but harm to those "isn't necessarily captured by focusing or giving people individual rights when it comes to privacy."
The answer to the question of what a better privacy regulation regime would look like isn't straightforward, however.
For one thing, it's possible that society might have a positive interest in some data being shared even if individuals say it's an invasion of privacy to do so. Additional regulation could also run the risk of stripping choices from individuals who want their data shared, Solove noted.
"Paternalism denies choice, also. It also denies consent. So no matter which way we go – we go with consent, we don't really get consent, and if we go with paternalism, we don't really get consent, either."
A possible way forward, Solove said, is to have regulations focus more on downstream uses of data "rather than trying to have the management at the time that people give up the data."
Asked later what a new law might look like – use-based prohibition on secondary re-use? – Solove said he doesn't yet know.
"All I know is that to have something meaningful, we must turn another way," he added.
- Listen to Solove's Nov. 9 talk at the Harvard Law Review symposium