3.5. Tentative answers: technology as a new tool of privacy protection and as an object of regulation
1. As the effectiveness of legal protection has decreased, the importance of new technologies has risen – with the application of these technologies, privacy protection can be carried out more effectively, but in certain cases traditional balances may be upset (for example, the balance between the interests of criminal investigation and the protection of privacy). In these cases the legislator must intervene in order to restore the balance. One new technology that may also be used for the protection of privacy (although, according to the above definition, it does not belong to the group of privacy protection technologies) is public key encryption.
2. For a long time, state-employed mathematicians were the only ones developing encryption algorithms, but as early as the 1970s the need for the private use of encryption arose. IBM had been carrying out research on securing communication in the computer systems of the banking sector since 1971; although the publication of its results was not welcomed by the National Security Agency, this research ultimately produced DES, the standard algorithm for the symmetric encryption of data. The essence of symmetric encryption is that the sender and the receiver both use the same key, and passing the key to the other party is problematic; without a previous exchange of keys, this system is therefore not suitable for ensuring secure communication among a large number of users of a computer network who are unknown to each other. Diffie and Hellman were searching for a solution to this problem, and they found one: they published their first article on public key encryption in 1976, and not much later Rivest, Shamir and Adleman came up with a working method realizing public key encryption.97 Thus public key infrastructure was born, in which parties unknown to each other can verify each other's identity if they use digital signatures, and they can also send each other encrypted messages without a previous exchange of keys.
The user uses two keys in the public key system: a public and a secret one. The public key is publicized, while the secret key is known only to him. When generating an electronic signature, the user signs the message (more precisely, a version of it created in a special way) with his secret key, while the recipient can check with the public key of the sender that the message has remained unchanged and that it was indeed signed by the sender. In the case of encryption the process is reversed: the sender encrypts the message with the public key of the recipient, after which only the recipient can decode the message with his own secret key.98 Public key encryption is highly effective: if an RSA algorithm is used and the key is sufficiently long, it is practically impossible to decode the message with the available computing capacity, as long as no mathematical procedure is found that makes the factorization of large numbers into primes feasible.
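The two operations described above can be illustrated in a few lines of code. The following is a minimal sketch using the third-party Python cryptography package (the choice of library is an assumption of this illustration; the text does not prescribe any particular implementation), showing signing and verification in one direction and encryption and decryption in the other, with an RSA key pair.

```python
# Minimal sketch of the two public key operations described above,
# using the third-party Python "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The user generates a key pair: the public key is publicized,
# while the secret (private) key is known only to him.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"an electronically submitted document"

# Digital signature: the sender signs (a hashed version of) the message
# with his secret key; anyone can verify it with the sender's public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss,
                  hashes.SHA256())  # raises InvalidSignature if altered

# Encryption: the reverse direction. Anyone can encrypt with the
# recipient's public key; only the recipient's secret key can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
```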
The doctrine of shared information systems may be transferred to the field of electronic administration with the help of encryption technologies. But we should also keep in mind that the use of encryption technologies as such raises further legal and regulatory questions. The most important among these, in my opinion, is the question of regulating access to the keys that make encryption possible.
Encouraging the use of encryption technologies is, on the one hand, crucially important for building the trust required for the development of electronic administration and for the enforcement of the right to personal data protection in this environment, and it should therefore be supported by the state. On the other hand, the proliferation of encryption technologies may endanger secret data collection. With the spread of such tools, the balance that exists for traditional channels of communication between the interest of privacy protection on the one hand, and criminal investigation and national security on the other, may be upset: with the new encryption technologies, secret data collection becomes impossible even with the resources available to the state.
What exactly is the danger that public key encryption poses to the interests of criminal investigation and national security? Public key encryption is user-friendly; software and hardware tools based on it can easily be developed by anyone and published on the internet – the first and best known of these programs is Pretty Good Privacy (PGP), written by Phil Zimmermann and available for free99 – and it has made it possible to protect privacy and business secrets while using communication tools based on digital technology. The dangers are pointed out by the ideas of certain ideologues in the United States who call themselves crypto-anarchists. Tim May, a representative of the crypto-anarchists, writes the following:
“Just as the technology of printing altered and reduced the power of medieval guilds and the social power structure, so too will cryptologic methods fundamentally alter the nature of corporations and of government interference in economic transactions. Combined with emerging information markets, crypto anarchy will create a liquid market for any and all material which can be put into words and pictures.”100
It was also Tim May who anonymously posted on the internet the fictitious call of an organization named BlackNet. The identity of BlackNet is a public key available only on the internet, which enables users to send it encrypted messages. What does this network deal in?
“BlackNet is in the business of buying, selling, trading, and otherwise dealing with *information* in all its many forms. We buy and sell information using public key cryptosystems with essentially perfect security for our customers. Unless you tell us who you are (please don't!) or inadvertently reveal information which provides clues, we have no way of identifying you, nor you us. Our location in physical space is unimportant. Our location in cyberspace is all that matters. Our primary address is the PGP key location: [...] BlackNet is nominally non-ideological, but considers nation-states, export laws, patent laws, national security considerations and the like to be relics of the pre-cyberspace era. Export and patent laws are often used to explicitly project national power and imperialist, colonialist state fascism.”101
In the manifesto the organization goes on to specify its areas of interest: business secrets, inventions, any valuable information. It also describes the procedure of payment. This fictional organization even has its own currency, and anonymity is secured during the entire course of the business transaction, while the system may be used for anything – be it legal or illegal. The system described by May, although many people took it seriously, did not exist in reality. Still, the so-called “anonymous remailers” soon became popular: servers through which users may send and receive information in such a way that their identity remains hidden from the receiver, provided that each server is in a different country.
A much larger and more real threat than the revolutionary ideas outlined in the theories of the crypto-anarchists is the use of encryption by criminal groups. With these tools criminals can make the interception of their communications impossible (something they could not achieve with traditional means of communication), and they can keep from the authorities digitally stored data that could be used to prove a crime or to reach sources of information concerning a crime.
The American government reacted very promptly once it had assessed the possibility of “crypto-anarchy”. The Clipper chip, developed with the help of the National Security Agency, was a hardware tool which, built into computers and telephones, realized a high level of encryption while also allowing access by criminal investigation and national security bodies: every key had an additional second key for its decoding, which was divided in two by the manufacturer of the chip, and the two halves were deposited “in escrow” with government bodies. National security and criminal investigation bodies could obtain the two halves of the key from the two bodies by producing the required permits, and with the combined key they could decode the message.102 This is the so-called “key escrow” system.103 The idea is appealing,104 since it gives criminal investigators, with the required guarantees, the same tools they have for traditional communications systems. Still, the Clipper chip proved to be a big flop. Human rights organizations and lobby groups of the information industry launched a fierce attack against the program after the Clinton administration announced it in the spring of 1993 – under the program, the administration supported a key escrow system based on voluntary use (manufacturers of devices equipped with the chip also received large government orders).
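The splitting of the decoding key can be illustrated with a simple XOR secret split: neither escrow agent's half reveals anything about the key on its own, but the two halves together reconstruct it. This is an illustrative sketch only, and does not reproduce the actual internal format of the Clipper escrow scheme; the function names are invented for this example.

```python
import secrets

def split_key(unit_key: bytes) -> tuple[bytes, bytes]:
    """Split a decoding key into two escrow shares via XOR.
    Each share alone is indistinguishable from random noise."""
    share_a = secrets.token_bytes(len(unit_key))               # deposited with agency 1
    share_b = bytes(k ^ a for k, a in zip(unit_key, share_a))  # deposited with agency 2
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """An authority producing the required permits obtains both shares
    and XORs them together to recover the decoding key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(16)
a, b = split_key(key)
assert recombine(a, b) == key   # only the two halves together restore the key
```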
There are several counter-arguments. The technological realization of key escrow endangers the security of the encryption (not long after the program had been announced, a security flaw was found in the realization of the Clipper chip), and the devices equipped with the tool would not be exportable, since the foreign market would not accept a “back door” open only to American authorities. The operation of key escrow would also clearly entail high costs. The biggest problem, however, is that the control of encrypted message traffic remains impossible even with key escrow: the message decoded with the help of the escrowed keys may contain another encrypted message, and it is also possible that the coded message itself is hidden in the bits of a picture or sound file modified according to certain rules. The system would thus weaken security unnecessarily – so went the arguments of the opponents. The American government experimented with modifications of the proposed system: according to later plans, private companies instead of government bodies could have become the trustees keeping the keys. But after the USA failed to persuade its European allies of the necessity of an international key escrow system, the development of an infrastructure based on key escrow was taken off the agenda.105
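The point that a coded message can be hidden in the bits of a picture or sound file is the technique known as steganography. The following is a minimal sketch of one common “rule” of this kind, least-significant-bit embedding (the text does not specify a particular method, so this variant is an assumption of the illustration): the lowest bit of each carrier byte is overwritten with one bit of the hidden message, a change imperceptible in image or sound data.

```python
def embed(cover: bytes, payload: bytes) -> bytes:
    """Hide payload bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover file too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Read the hidden bits back out of the carrier."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(n_bytes))

carrier = bytes(range(256)) * 4          # stands in for image or sound data
stego = embed(carrier, b"hidden")
assert extract(stego, 6) == b"hidden"    # the carrier still looks innocuous
```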
In Europe, too, the idea of creating a system similar to the American one appeared, but since the European Commission's 1997 communication “Towards A European Framework for Digital Signatures And Encryption”106 there have been no European initiatives seeking to limit the use of cryptography. Member states also tend to refrain from restricting the domestic use of encryption: according to the cryptography policy adopted by the German government in 1999, the use of strong encryption should be explicitly encouraged, and France lifted its earlier legal restrictions on encryption in 1999. The key escrow system is unknown to the regulations of Great Britain, but under the Regulation of Investigatory Powers Act adopted in 2000, anybody possessing an encryption key may, under specified conditions, be obliged to disclose it.107
3. The primary goal of the development of privacy enhancing technologies (PETs) is not the creation of data security independent of data content, but specifically the protection of privacy. While the legal regulation of technologies serving data security may affect the level of privacy protection, PETs are related directly to data protection regulation understood in the narrow sense: this regulation may stimulate their application, and their application may make data protection law more effective and ensure its effectiveness in the new context pervaded by the flow of digital data. The need for developing privacy enhancing technologies was already raised in the literature at the time of the adoption of the first generation of data protection norms, and during the debates concerning the third generation of regulations it became a popular topic.108
In Burkert’s typology there are four types of privacy enhancing technologies.109 By subject-oriented solutions he means those that concentrate primarily on making it difficult to restore the link between the data subject and the data (an example is a credit card system which uses the number of the credit card during transactions, while the link between the user of the card and the card number is encrypted – in this example with public key encryption – and may be restored only by the bank and the data subject together). Object-oriented solutions build on the idea that the object of the transaction carries no information about the persons involved (examples in a traditional context are payment in cash and in-kind transactions). Transaction-oriented solutions focus on the information that is created during the transaction: for example, instruments that destroy data after a certain period of time, as the sketch below illustrates. The fourth, system-oriented type integrates the above three into a system that creates zones of interaction where the identity of the subjects is hidden, the objects exchanged bear no traces of those who get hold of them, and no record of the interaction is created or maintained.110 Burkert’s examples in a traditional context are the Catholic confession and anonymous interactions with crisis intervention centers, while in an electronic environment an example is communication through encrypted messages that are subsequently destroyed by all parties.111
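As a sketch of the transaction-oriented approach, the following hypothetical record destroys its content once a retention period expires. The class and its interface are constructed for this text and are not an implementation Burkert describes.

```python
import time

class ExpiringRecord:
    """Transaction data that destroys itself after a retention period."""

    def __init__(self, value: str, ttl_seconds: float):
        self._value: str | None = value
        self._expires_at = time.monotonic() + ttl_seconds

    def read(self) -> str | None:
        # Once the retention period has passed, the content is gone for good.
        if self._value is not None and time.monotonic() >= self._expires_at:
            self._value = None
        return self._value

record = ExpiringRecord("purchase: 1 item, card ending 4242", ttl_seconds=1.0)
print(record.read())   # within the retention period: readable
time.sleep(1.1)
print(record.read())   # afterwards: None - no record of the transaction remains
```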
4. P3P is understood here as a privacy enhancing technology, though some experts discuss it as an example of industry self-regulation.112 In our view this technology is neutral in its application: although a significant role in its development was played by the efforts of market actors focused on pre-empting state regulation, it can be applied both as industry self-regulation and as state regulation.
The World Wide Web Consortium (W3C),113 which has more than 400 members – including universities, businesses and other organizations – plays a very active role in developing possibilities for self-regulation of the internet. The Platform for Internet Content Selection (PICS)114 proposed by the W3C came into the limelight after the Supreme Court had declared unconstitutional the Communications Decency Act of 1996, the act that was supposed to regulate internet content. Since P3P is in many respects a technology parallel to PICS, a short introduction to the latter's most important characteristics is needed here.
PICS is a specification kit with which the standard software used for internet access can filter accessible materials according to specifications set by the user, and block access to those that the user categorizes as “unwanted.” The rating of the content in question has to be done in advance. Rating may be carried out according to any rating system reflecting different specifications or systems of values, and may be done by the creator of the file or by a third party (a third-party rating service or “label bureau”); a given piece of content may be rated according to any number of different rating systems. During the rating process the content provider or the rating service assigns a tag to the given material based on the given rating system. If the rating was carried out by the creator of the file, the tag is included in the file itself (for example in the HTML code of the web page); if the tag is assigned by a rating service, it is stored with the rating service but can naturally be matched to the given file. When downloading a file, the software configured according to the user's requirements checks the tag assigned to it (either in the file in question or in the registry of the chosen rating service), and if the material's tag does not correspond to the requirements previously set by the user, access is blocked. At present PICS can be used for filtering web pages, files transferred via the FTP or Gopher protocols, and messages of Usenet newsgroups.
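The filtering logic just described can be sketched in a few lines. The category names and thresholds below are invented for illustration, and real PICS-1.1 labels use an s-expression syntax rather than the simple dictionaries shown here.

```python
# The user's requirements: the maximum acceptable level in each category.
user_limits = {"violence": 1, "nudity": 0}

# A label assigned to a page, either embedded by the content provider
# or fetched from a third-party rating service ("label bureau").
page_label = {"violence": 2, "nudity": 0}

def is_allowed(label: dict[str, int], limits: dict[str, int]) -> bool:
    """Permit access only if every rated category stays within the limits."""
    return all(label.get(category, 0) <= limit
               for category, limit in limits.items())

if not is_allowed(page_label, user_limits):
    print("Access blocked: the page's rating exceeds the user's settings.")
```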
The explicit goal in creating PICS was to replace state regulatory instruments similar to the Communications Decency Act by creating the possibility of a kind of self-regulation. The illusion shared by many in the early period of the internet, according to which the net is a “lawless space” falling outside the sovereignty of states, contributed significantly to this idea. It should be noted that after the euphoria over self-regulation and the declaration of the CDA as unconstitutional, and after the celebrations of the civil rights organizations that had initiated the process were over, other opinions surfaced as well, according to which state regulation actually provides better guarantees than the Platform for Internet Content Selection, which was intended as an instrument of self-regulation based on preferences set by the user, but could be used for abusive control by internet providers or even the state.115 PICS could be used for rating, tagging and filtering based not only on the content of pages but on any of their characteristics; it was the direct ancestor of P3P. An American university professor, Joel Reidenberg, used PICS to rate web sites according to whether they complied with the data protection norms set by the Canadian Standards Association.116
The Platform for Privacy Preferences (P3P), developed by the W3C to ensure more effective application of the data protection preferences set by the user, applies technological solutions very similar to those used by PICS.117 The given web site rates its own data protection practice according to set criteria. This information has to be included in the code of the web page in the required XML format, so that it can be detected by the browser. On this basis the user visiting the page is informed about 1. the types of data that the page collects about him, 2. the purposes for which these data are used, and 3. the uses in relation to which he can exercise his rights, whether by positive consent (opt-in) or by objection (opt-out). The user may set his data protection preferences in advance: he may specify, for example, that he does not want to visit pages that use cookies, pages that require the user to reveal personal data, or pages that forward data to third parties. The P3P specifications also allow the formulation of highly complex data protection rules: it may be specified, for example, which law is to be applied in case of disputes concerning the given web page.
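The matching of a page's declared practice against the user's preferences can be sketched as follows. The XML fragment is a simplified, illustrative stand-in with invented element content; real P3P 1.0 policies use a richer, standardized vocabulary.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a P3P-style policy embedded in a page's code.
policy_xml = """
<POLICY>
  <STATEMENT>
    <PURPOSE>site-administration</PURPOSE>
    <DATA ref="clickstream"/>
  </STATEMENT>
  <STATEMENT>
    <PURPOSE>third-party-marketing</PURPOSE>
    <DATA ref="user.email"/>
  </STATEMENT>
</POLICY>
"""

# The user's preference, set in advance: refuse pages that declare
# any of these purposes for the use of his data.
rejected_purposes = {"third-party-marketing"}

declared = {p.text for p in ET.fromstring(policy_xml).iter("PURPOSE")}
conflicts = declared & rejected_purposes
if conflicts:
    print("Warning before visiting the page; conflicting purposes:", conflicts)
```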
Users may install P3P in a web browser as well as on a proxy server. On the server side, the operator of the web site may use software that, based on the answers given to the questions posed, automatically generates P3P code corresponding to its data protection rules and includes this code in the code of the web page.
P3P has been included in the Internet Explorer browser since version 5.0 in the summer of 2001, and version 6.0 can filter cookies according to preferences set in the P3P system and produce a report on the data protection rules applied by a web page, provided the page has a P3P-coded policy.118
Like PICS, P3P received considerable attention within the European Union as well. First, the European Commission's DG XV issued an opinion suggesting that the P3P vocabulary be expanded. In September 1999 the working party established under Article 29 of the EU data protection directive met with the designers of P3P in Brussels. Prior to this meeting the working party had issued an opinion according to which P3P will not in itself be sufficient to protect privacy on the Web.119 This is indeed true: P3P is only a technology, which may serve as a tool for enforcing data protection norms. Its use within the context of European data protection legislation, however, in which the data protection policy of a given web page may be formulated only within the framework of enforceable data protection rules, is certainly useful. In my opinion, the application of P3P on government web pages may contribute to building users' confidence, and to creating the expectation that other web site operators provide the same information as the pages run by the government.
According to Schwartz, an obstacle to the effectiveness of P3P may be the practice characterizing the American context (in which the default rule followed by web pages is the maximal use of data), where the user of such a filtering system would block out most of the web's content.120 If supported by state regulation, however, this solution may be used to enforce privacy protection to the extent set by the user.
5. It is a fact, though, that we have been reading about privacy enhancing technologies in the narrow sense in the literature for decades without seeing them widely applied. Privacy enhancing technologies cannot be a substitute for public policy, only a supplement to it.121 Burkert offers an interesting explanation of the roots of the barriers to privacy enhancing technologies. Among other things, he sees them in an age in which the traditional social ties of the individual are loosening, creating a strong urge to re-establish and maintain these ties – an urge that may even reach the point where the anonymity provided by these technologies loses its appeal.122