How can the regulator balance concerns about the use of data vs. the desire for economic progress?


Presentation by Cordel Green, Executive Director of the Broadcasting Commission, Jamaica

At the International Regulators Forum (IRF), Vienna, October 6, 2014

(These views are not necessarily those of the Broadcasting Commission)

In a 2012 paper looking at the power to be harnessed from big data, the World Economic Forum suggested that “by analysing patterns from mobile phone usage…we [could]… predict the magnitude of a disease outbreak halfway around the world, allowing aid agencies to get a head start on mobilizing resources and therefore saving many more lives.”

Four years ago, a study of 2,400 banks in 69 countries found that greater information sharing among banks led to greater profitability in the banking sector, reduced bank risk, a lower likelihood of financial crisis and, ultimately, economic growth (Houston et al.).

So, there is no question that data use is having a far-reaching impact on economic growth and development by facilitating improved delivery of services and better productivity.

However, there needs to be a balance between access to data, on the one hand, and meaningful use and dissemination of data, on the other.

We are each guaranteed the inalienable right to communicate ideas and opinions freely and to be free in our ability to receive such communication (freedom of expression/communication). But we also expect that our ideas and opinions should be free from interference by persons who are not their intended recipients (“data privacy”) (see Guy Berger et al.).

But, in a world where persons download applications which disclose their every location and where they post every shred of information about themselves, what does privacy mean?

I believe that, if asked, most people would willingly agree to “surrender” or “sell” their private information, but not to have it “taken” or “stolen” from them. So, our concept of privacy is predicated on the expectation that we should have an inviolable ability to “determine for ourselves when, how, and to what extent information about us is communicated to others” (Westin).

However, technological advances are making our expectation of privacy unrealistic. Information is being amassed on an unprecedented scale, and most people have no knowledge of when, or the nature and extent to which, information about them is being stored, accessed and shared.

At the moment, information is being gathered and stored largely as a machine-to-machine activity. But if trans-humanists such as Ray Kurzweil, a director of engineering at Google, are right, we are moving towards a singularity, and in that process people and machines will merge. What does that portend for privacy?

The trans-humanist author Gennady Stolyarov would have us not be concerned: very few consumers, he says, would agree to purchase any kind of machine augmentation if they saw it as posing severe risks to their privacy (Guardian). This strains credulity.

If we agree that the use of technology and the concept of privacy are shaped by the norms of society, then the trans-humanist expectation that privacy will be protected by human behavior is highly questionable, since the use of technology today lacks attachment to many traditional values.

I believe that a categorization of privacy policy considerations is one useful route to a just and proportionate response to concerns about infringement of privacy.

One important policy consideration is whether personal information should be treated as property. The law recognizes the distinct tort of misappropriation of personality, but this is limited to the commercial value of the image and likeness of celebrities. Yet we are all now celebrities: regular folk amass legions of “followers” and “friends” who are really “fans” in Twitter-land, on Facebook, Instagram and other social media.

It is therefore not an unreasonable proposition that in a world where there is increasing commercial value in information about ordinary persons, and “…a strong tendency to ‘propertize’ everything in the realm of information” (Mark Lemley), ordinary people should be assigned a property right in personal information about themselves (Lessig).

Some would argue that this would create an impossible situation, with news agencies, for example, being prohibited from publishing information about persons without their prior permission. But there can be exceptions to, and limitations on, property rights in personal information.

There is also the question of whether data protection legislation is an appropriate framework for protecting privacy. I am inclined to the approach that focuses on regulating the “use” of data rather than the “protection of privacy”, simply because of the reality that “preventing data processing is no longer valid in the current networked database society” (Bert-Jaap Koops). So, data protection should “be focused on decent treatment in the data usage stage” rather than “prevention in the data collection stage” (Koops).

A related point is that in a networked database society, many individuals readily agree to ‘privacy policies’ in order to be and to stay ‘connected’. These policies give data controllers access to location, contact files, browsing history and personal details posted online.

These privacy agreements oftentimes give the data controller the right to share the data collected and to combine information from different services, such as Google Maps and Google Chrome, all in an effort to provide improved services to users. (See Google Privacy Policy)

Studies have shown that a majority of internet users either do not read privacy policies before agreeing to them, have little or no idea what the policies actually mean, or see no need to read them at all, on the assumption that legislation would not allow data collectors to misuse their data (Morran, Dachis).

Researchers at Carnegie Mellon University found that the average privacy policy is 2,500 words long and takes an average of 10 minutes to read (Out-Law.com). These privacy policies are not only lengthy but difficult to read, and therefore do not support rational decision-making.

Adapting Lord Denning’s “Big Red Hand Test”, some of these privacy clauses would need to be printed in red ink, with a red hand marked “danger” pointing at each line, before they could be held to be sufficient (J Spurling Ltd v Bradshaw).

Moreover, it is those on the underside of the digital divide who are most vulnerable to exploitation and violation of privacy. Social justice requires the society to develop policies and practices for their protection.

Approaches such as those crafted in the European Commission’s Data Protection Regulation and the UK’s Data Protection Act are worthy templates for how data protection can be undertaken within a framework of regional co-operation and in a manner consistent with the ideals of human rights. However, they presume a level of digital literacy that does not exist in every country.

We must therefore take care to avoid the tendency to apply other countries’ systems in a “one size fits all” mode to developing-country contexts.

There is also justifiable consternation and skepticism about the extent to which the reach of the state should or can be extended in a digitized, networked, database-driven world.

In these matters, the regulator’s role is to craft or influence policy and regulations that are fit for purpose: realistically responsive to the realities of the modern information ecosystem and framed around the primacy of the ordinary citizen as data creator.

