Tuesday, May 6, 2008

Back to Basics: Privacy Ethics (part 2)

So, let's continue the journey through the 'what and why' of privacy. The second argument that Jeroen van den Hoven makes in his contribution Information Technology, Privacy and the Protection of Personal Data is that 'identity-relevant information' should be protected, not just any data about people (see: Van den Hoven/Weckert (eds.): Information Technology and Moral Philosophy, Cambridge University Press 2008).

Argument number 2: The scope of data protection should cover 'identity relevant information'.

Van den Hoven argues that the European definition of personal data is referential. In order to be 'personal data', the data need to be about a specific person, not just any person. This means that personal data need an identity-relevant context, regardless of whether that context is right or wrong. Without such a context, data have no meaning and are merely attributive: they describe a situation or fact without reference to any specific individual. One could also argue that attributive data are conditional; they only become personal data if an additional identity-relevant condition occurs, for instance because the raw data are placed in an identity-relevant context or combined with another piece of identity-relevant information.

Because of the narrow definition of personal data in the European Data Protection Directive 95/46/EC, Van den Hoven concludes that attributive data go unprotected, but may very well be used to harm people with the assistance of new technologies such as data mining, profiling, etc. The Article 29 Working Party has recently tried to protect attributive data by dramatically extending the definition of personal data to data that are only conditionally identifiable. The most striking example of this extension to attributive data is example 16, about a collection of graffiti paintings, in its Opinion on the Concept of Personal Data (Opinion 4/2007, WP136).

However, this opinion of the Article 29 Working Party has serious consequences. Data protection law not only tells us what should be protected and why, but also how. It is the 'how' where the problem lies, especially in the case of attributive data. By bringing attributive data under the protection of the Data Protection Directive, all the formalities that come with this protection regime, like notifications, privacy notices, data export restrictions, prior checking, etc., are also triggered.

The argument for bringing attributive data under the protection of the Data Protection Directive is that the Working Party "assumes that the data controller or any other person has the means likely reasonably to be used to identify the data subject". In other words, there is probably always somebody somewhere who can re-attach an identity-relevant context to what is, for most of us, only meaningless information. For this very reason, some Data Protection Authorities in Europe do not consider key-coded data to be anonymous: if somebody has the key, he or she can always apply an identity-relevant context to the coded data, and therefore such coded data should be considered 'personal data' and treated in full compliance with the Directive.
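The key-coded data argument can be made concrete with a small sketch. Everything here is invented for illustration (the field names, the coding scheme, the example identities); it simply shows why regulators say coded data are only conditionally anonymous: the coded records are merely attributive to anyone without the key table, yet trivially re-identifiable to whoever holds it.

```python
import secrets

# Hypothetical example: two data subjects, invented for this sketch.
identities = ["alice@example.com", "bob@example.com"]

# The coding step replaces each identity with a random code and keeps
# the mapping in a separate key table, held by the original controller.
key_table = {identity: secrets.token_hex(8) for identity in identities}

# The coded dataset that a researcher or processor might receive.
# Without the key table, each record says only that "some subject"
# has a given blood pressure -- attributive, not referential.
coded_records = [
    {"subject": key_table["alice@example.com"], "blood_pressure": 128},
    {"subject": key_table["bob@example.com"], "blood_pressure": 141},
]

# Whoever holds the key can invert the mapping and re-attach an
# identity-relevant context to every record.
reverse_key = {code: identity for identity, code in key_table.items()}
for record in coded_records:
    record["identity"] = reverse_key[record["subject"]]
```

The point of the sketch is that anonymity here is a property of who holds what, not of the data themselves: destroy the key table and the records stay attributive; keep it anywhere, and the identity-relevant context can always be restored.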

The same argument is used to bring dynamic IP-addresses under the full protection of the Directive; the Internet Service Provider can always find out who was using a particular IP-address at a particular time. The fact that the government has given itself broad powers to collect information from private databases and to combine this information with other information, is the main argument for the position that electronic footprints, such as leaving an IP-address when visiting a website, are never completely anonymous and should thus be protected by the data protection laws.

This argument directly impacts businesses that collect attributive information to improve their products and services. They don't care who made the "electronic footprints" that their customers leave behind. As long as they have no intention of sharing this information with other parties or of using the data to harm their customers, there is little privacy risk for such data.

And here we uncover the key issue in the underlying ethical and legal debate that is currently going on. As soon as a piece of information about a person is disclosed, the privacy of that piece of information is, in principle, lost forever (see also Van den Hoven). If the party that holds that piece of information does not know to whom it belongs and has no intention of ever finding out, the likely impact on the privacy of that person, now and in the future, is close to zero. It is a 'footprint' made by 'somebody'.

However, the very fact that another party, most notably the government, can force the keeper of attributive data to release the data, triggers the slumbering privacy risk. This third party may be able to attach identity-relevant information to the otherwise anonymous data. This could be done by various means, such as forcing the disclosure of the key to unlock the data, pattern recognition and data mining, or combining various pieces of data with identity-relevant information that is already in the database. According to Van den Hoven such information is then used in another "sphere of access". It is the crossing into that other sphere where the slumbering privacy issue comes to the surface.

So, the moral problem that is presented here is: should people's "footprints" that are only attributive be protected?

My answer to that question is an affirmative "Yes", for the simple reason that the slumbering privacy risk may reveal itself at any time, any place, sometimes intentionally, but very often by accident. Therefore, people who have access to the "footprints" of other people should be careful as to what part of this information they reveal to the outside world, because they have no way of knowing whether the data may have identity-relevant meaning to others. Speaking "hypothetically" about a real case with strangers at a party, or putting your holiday pictures of strangers on the Internet, could be a risk to privacy and should therefore be avoided.

BUT, unlike the Article 29 Working Party, I don't think that it is necessary to bring such data under the scope of the Data Protection Directive by default. They could very well be protected by criminal law or civil/tort law, so as to address any harm inflicted on the individual by the misuse of these data. Where somebody has intentionally or accidentally revealed such "footprints" to other people, and by doing so has brought such data outside the original "sphere of access", the victim may very well have a valid claim against that person for violation of privacy, if this piece of information is later used to harm him or her. But bringing ALL attributive footprints within the scope of data protection legislation is one big step too far.
