What 'Rights' Do We Have When We're Talking About Our Private Online Data?
Defining terms is tricky, particularly when governments with bad track records on privacy want to call the shots.
Do you have a "right" to your digital data? The question is a popular one in today's debates over privacy and surveillance.
The European Union's (EU) General Data Protection Regulation (GDPR) attempted to codify specific rights for individuals over their own data, including the rights to be informed, to be forgotten, and to object. Ideas like a "data dividend," where individuals would be monetarily compensated for their data by online platforms (much as the state of Alaska pays residents an oil dividend), are another attempt to create rights to data.
For those who understand the value of property rights and market allocation, the notion that rights to data are underprovided has intuitive appeal. Property rights and market activity efficiently direct economic goods to their most highly valued uses, except in a few special circumstances of market failure. It makes sense that a system of rights governing data use would solve many of the problems that so vex us in the digital age.
But if the solution of rights to data appears so straightforward, why has the market struggled so far to produce these rights? A new Competitive Enterprise Institute research paper by Chris Berg and Sinclair Davidson, "Selling Your Data Without Selling Your Soul: Privacy, Property, and the Platform Economy," provides some insight into this question.
The authors discuss some of the economic barriers that have complicated the path to a rights regime for personal data before considering why top-down efforts to erect one, such as the GDPR or data dividends, are ill-suited to that purpose. Instead, they see common law remedies and technological developments as a surer path to an appropriate system of enforceable rights for online data.
Data is different
Is "data" really the new oil?
In a sense, data "fuels" the operation of many online platforms. Businesses combine user-provided data on an individual's name, demographics, and interests with observed data of how individuals interact on their platforms to produce inferred data about a user's likely commercial habits that can be monetized for other parties.
Because data keeps the whole operation financially afloat, and because harvesting it requires keeping tabs on information that some people may wish to keep private, some conclude that users should receive special government protections or revenue streams relating to the use of their data. As Sen. Josh Hawley (R–Mo.), a frequent tech critic, recently asked: "Isn't our data our property?"
But data, unlike oil, isn't rivalrous. Once someone uses oil, it's gone. Data, by contrast, is just information, and that information can be used and obtained in different ways. It is simply there, and it exists regardless of who uses it and how it is used.
Can I really "own" the right to make judgments about me based on the fact that I am an American woman in my thirties with interests in baby products and cryptocurrency? It's especially hard to make this case if I freely provided this information to a platform that makes use of this data. Things get dicier when it comes to the observed data of my associates that platforms may use to make inferences about me. But still, do I have the right to tell others not to behave in such a way that I am affected?
Once we think more deeply about the implications of treating data like rivalrous, tangible property, with all the attendant rights and duties, the problems from a libertarian perspective become clearer.
Why market-driven data rights haven't emerged
Because data is different, property rights have not emerged for data as they have for goods like clean air and beaver furs. Berg and Davidson review the law and economics literature to help shed light on why this is so.
First, data-driven online platforms are generally two-sided markets. Facebook serves two customer bases: normal social network users and advertisers. The profits made on the advertising side are used to cross-subsidize the free user experience. And users agree to divulge data to Facebook that it then monetizes.
Online platforms would not have grown to the impressive size that they have if they did not create a correspondingly great amount of value for both types of users. Really, when we worry about data, we're worrying about externalities, or the social costs or benefits that are not internalized in market exchange.
Why do people sign up for platforms in the first place? They'd like to capture some of the positive externality that advertisers are subsidizing: they are hoping that being on the platform will increase their chances of meeting their future spouse, or connecting with a business opportunity, or just enjoying chatter among friends. No one wants the positive externalities from platforms to go away.
What people worry about are the negative externalities: they don't want other parties' data or activities to adversely influence how they are treated. They don't want a sketchy data reseller to shuttle their information to an unauthorized party who targets them with junk. They don't want to get hacked. And they certainly don't want to be put on secret blacklists that limit their opportunities or liberty.
Complicating the picture even further is the fact that each person has their own idea of what a positive or a negative externality looks like when it comes to data. I cringe whenever someone shares a picture of me on a social media platform. Others love it. When governments try to make these decisions for us, a lot of people will inevitably be left unsatisfied.
In an ideal world, externality problems can be resolved through property rights, which would accrue to the party that values them most. This is the insight of the economist Ronald Coase. But for the Coase theorem to hold, transaction costs must be negligible.
The problem, as the authors point out, is that those transaction costs appear to be large and quite hard to overcome. In fact, perhaps such a right has not yet emerged because it is "not particularly valuable" in the current market, since it means nothing without a way to maintain secrecy.
Governments don't really care about your rights
Okay, so can the government step in to kickstart the definition of such rights? Many see no other way. But as Berg and Davidson point out, governments are among the least trustworthy entities when it comes to protecting data privacy.
Consider the many surveillance and data retention policies operated by governments across the world. They have no intention of curtailing state collection of private data when it suits their interests. Indeed, many of their surveillance programs rely on private data repositories as sources.
It is not surprising, then, that government forays into defining data rights have so far proven so inept. The GDPR, for instance, has mostly protected the market dominance of Google and Facebook in the European advertising market. Ironically, the GDPR has also been exploited by malicious actors to extract others' personal data or punish market rivals.
Really, the GDPR has mostly ended up as a way for European governments to extract revenue from foreign companies. The EU has no intention of ceasing its member states' various surveillance programs and other mandates that data be processed in ways that further government goals. We shouldn't expect other top-down, government-driven regulatory regimes to do much better.
Common law and innovation are better remedies
This does not mean there are no solutions, nor that governments have no role in discovering an appropriate rights regime for online data.
It is clear that some kind of formalized regime articulating harm and remedy would be valuable. But GDPR-style regulation has proven unwieldy and counterproductive.
Berg and Davidson point instead to a common law approach advocated by classical liberals like Friedrich Hayek and Bruno Leoni as an "evolutionary and adaptive approach to managing social conflicts." Rather than trying to identify and resolve all disputes through one-stop-shop legislation, a common law approach allows us to proceed iteratively and learn from a developing body of precedent. It's not perfect, and it can't rush the market to develop property rights before they are adequately valuable, but it's at least a lot better than GDPR-like brouhahas.
In the meantime, what can we do? We are fortunate to live in a world with publicly available encryption technologies. They are getting more accessible all the time, and innovators are applying these techniques to solve data externality problems in ways that governments can't. Berg and Davidson point out that these technologies empower users to effectively protect their "rights to data" even before such "rights" have officially emerged.
Take the problem of transaction tracing, where companies use your card purchases to build advertising profiles on you that are sold to other parties. We have always had good, old-fashioned cash to serve as a buffer to this commercial panopticon, but this was obviously limited to in-person transactions. Now digital cash, or cryptocurrencies, can provide privacy online and cut the trail of private transaction tracking well before governments have developed the wherewithal to effectively address this problem.
For many data problems, there already exists a pretty good technology solution. Sick of Google? Download Brave and use Startpage for search. Worried about OS tracking? Linux may be for you, and Ubuntu is a great and accessible option. For many years, these projects have been successfully building a world where users are in a better position to screen out the negative externalities (data tracking) they wish to avoid, while enjoying the positive externalities (camaraderie, reputation management, and simple entertainment) that they feel are worth it for them.
Then there is the frontier. Consider a project like Urbit, which hopes to fundamentally change the user's relationship to how their data is processed. Instead of requiring users to navigate and interrelate a hodgepodge of the features and products listed above, Urbit aims to build data control into each component of the computing experience. Newer projects like these could provide even more robust user controls that help us make our own data tradeoffs even more precisely.
It was predictable that as we moved more of our lives online, more problems from the use of data would result. But it was also predictable that legal and technological solutions to these problems would emerge, and this study is a helpful reminder of that potent and underappreciated force for progress on data privacy issues.