Revocable Privacy
Abstract
Security and privacy are often seen as opposite, irreconcilable goals. Privacy advocates and security hawks cling to rigid viewpoints, fighting each other in an ageing trench war. As a result, measures to increase our security scorn our privacy, and privacy enhancing technologies do very little to address legitimate security concerns. Revocable privacy aims to bridge the two sides of the debate and break the status quo. Revocable privacy is a design principle (including the necessary toolbox) to build information systems that balance security and privacy needs. The underlying principle is to design a system that guarantees the privacy of its users, unless a user violates a predefined rule. In that case, (personal) information will be released. Laws and regulations by themselves are insufficient: they can be changed or sidestepped later on. That is why the principle of "code as law" is taken as point of departure: the rules and regulations must be hard-wired into the architecture of the system itself.
Introduction
Privacy is a hard concept to define [25, 12]. It has many dimensions (locational privacy, physical privacy, informational privacy, etc.) and is very context dependent: its importance and meaning depend on socio-cultural backgrounds and individual preferences. Moreover, technological developments (like the invention of the computer and the rise of the Internet) have had profound implications for the privacy of citizens in a society. Like freedom of speech, privacy is not only a personal value but also a societal value. It is a prerequisite for realising one's own potential and for developing one's own opinion, which in turn contributes to the development and innovation of society as a whole. See Solove [21] for a thorough discussion. Privacy is therefore protected by laws and regulations, both at the European level [10] and by nation states [26].
Our focus is on the informational dimension of privacy, and we loosely define privacy as the right (and the means) to control the release of personal information, even when that data is already stored by a third party. Technical definitions for several aspects of privacy, like anonymity, unlinkability, undetectability, unobservability, pseudonymity, and even identity management, are given by Pfitzmann and Hansen [20].
Security is also an important value for society, and one that has grown considerably in importance over the last decades. Even before the gruesome terrorist attacks on the World Trade Center in New York on September 11, 2001, large numbers of surveillance cameras were being installed in public places to prevent crime. Already back then, the Netherlands was the number one phone-tapping nation in the world [16]. To combat terrorism and to stop cybercrime, security ranks high on the political agenda these days.
Unfortunately, security and privacy are seen as each other's enemies. It is thought, without grounds, that one cannot be achieved without sacrificing the other. Privacy advocates and security hawks are stuck in a trench war. Given the high political importance attached to homeland security these days, this has resulted in approaches to increase societal safety that disregard the privacy of citizens. Similarly, when designing privacy enhancing technologies (PETs), no attention is paid to the quite reasonable request to also consider societal security issues.
Legal or regulatory attempts to remedy the situation are inadequate. Rules and regulations may change over time, allowing more possibilities to gather information about people after the fact. Such "function creep" occurs frequently: once a system, in principle, allows certain ways to collect data, sooner or later politicians or police investigators will ask for an extension of powers. Therefore, the solution must be found in limiting the possibilities at the outset, through technical means, in the architecture and design of the system. Unfortunately, because of the high political importance given to security, security countermeasures are being implemented without much regard for privacy. Examples, in the Netherlands and elsewhere, include camera surveillance, proposals for road pricing systems, smart payment systems for public transport, identity management systems, and the like.
To balance security and privacy needs and achieve a reasonable trade-off [27], we are developing revocable privacy. In this paper we present some models and definitions, indicate viable technical approaches, and show how they can help resolve the status quo.
Defining revocable privacy
As argued above, there is an inherent conflict of interest between security proponents and privacy advocates. We aim to strike a balance between security and privacy needs when developing large scale information systems. Legal or procedural approaches are insufficient, because they can easily be circumvented or changed. Proper solutions therefore need to be developed at the architectural level, using techniques for revocable privacy. In essence, the idea is to design systems in such a way that no personal information is collected, unless a user violates the (pre-established) terms of service. Only in that case are his personal details, and when and how he violated the terms, revealed. The data is, of course, only revealed to authorised parties.
We define revocable privacy as follows.
Definition of Revocable privacy. A system implements revocable privacy if the architecture of the system guarantees that personal data is revealed only if a predefined rule has been violated.
This definition can be refined by requiring that
- only data of the perpetrator is revealed,
- this data is only revealed to the relevant authorities,
- without consent of the perpetrator, and
- sometimes even without his knowledge.
The guarantees should hold for the design as well as the implementation. We distinguish two variants of revocable privacy.
- Spread responsibility: One or more trusted third parties verify whether all conditions for releasing personal data have been met, and grant access (or release the data) if this is the case.
- Self-enforcing architecture: The rules to release data are hard-coded into the architecture. If the rules are violated, the data is released automatically. If no rules are violated, no information can be obtained at all.
The idea of revocable privacy is certainly not new: back in 1988, Chaum et al. [7] proposed a scheme for off-line digital cash in which double spending a coin reveals the identity of the owner of that coin. Chaum did this in the context of his more general work on privacy enhancing technologies [5, 6]. More recent examples are limiting the amount one can spend anonymously at a single merchant [3], or revoking the anonymity of users that do not pay their bill at a merchant [2]. General techniques are scarce, however, and their value in real-world applications has not been properly investigated.
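To make the double-spending mechanism concrete, the following is a minimal sketch in the style of Brands-type e-cash; the scheme of Chaum et al. [7] uses a different, cut-and-choose construction, but the effect is the same, and all names and parameters below are illustrative assumptions rather than part of any cited scheme. The coin hides the spender's identity u behind a one-time random blinding value v: a single spend reveals only the uniformly random response u·c + v to the merchant's challenge c, but two spends of the same coin give the bank two linear equations from which u can be solved.

```python
import secrets

Q = 2**127 - 1          # prime modulus of the (hypothetical) field used by the coin

class Coin:
    """One withdrawn coin: identity u is hidden behind a one-time blinding v."""
    def __init__(self, identity: int):
        self.u = identity % Q
        self.v = secrets.randbelow(Q)   # fresh per coin; reusing it is what leaks u

    def spend(self, challenge: int) -> int:
        # A single response u*c + v mod Q is uniformly random and hides u ...
        return (self.u * challenge + self.v) % Q

def trace_double_spender(c1: int, r1: int, c2: int, r2: int) -> int:
    # ... but two responses for the same coin give two equations in (u, v):
    #   r1 = u*c1 + v,  r2 = u*c2 + v   =>   u = (r1 - r2) / (c1 - c2)  (mod Q)
    return (r1 - r2) * pow(c1 - c2, -1, Q) % Q

coin = Coin(identity=424242)
c1, c2 = secrets.randbelow(Q), secrets.randbelow(Q)
r1, r2 = coin.spend(c1), coin.spend(c2)          # double spending!
assert trace_double_spender(c1, r1, c2, r2) == 424242
```

Note how the privacy revocation is self-enforcing: no party has to decide to de-anonymise the spender; the identity simply becomes computable once the rule ("spend each coin only once") is broken.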
Existing techniques for revocable privacy
Many of the techniques currently in use for revocable privacy are based on trusted third parties. By spreading the power over many such parties (using secret sharing or similar techniques), one can mitigate the risk of corruption or subversion. However, such systems are in essence still procedure based: by changing the procedures and replacing the trusted parties, one can still change the rules of the game. We therefore believe that self-enforcing approaches to revocable privacy are the way forward. In this case there is no longer any need to trust a single entity (which is a problem in third-party based solutions, as argued above [9]). This line of reasoning follows the idea of "architecture is politics" and "code as law" [19], and is inspired by the "Select before you collect" principle [15, 14]. Our approach is privacy respecting [17] but not unconditionally privacy guaranteeing.
The 1996 PhD thesis of Markus Stadler [22] on revocable privacy serves as a valuable starting point. It provides us with the following toolbox of cryptographic primitives for revocable privacy.
- Fair blind signatures, defined by Stadler et al. [23] and further developed by Abe and Ohkubo [1]. Blind signatures allow a signer to sign a message such that the signer cannot later link its signature to the message it signed; in essence, the signer "blindly" signs a document. Fair blind signatures allow the signer to recover this link (thus revoking privacy), but only with the help of a trusted third party.
- Publicly verifiable secret sharing [8] allows all parties to verify that the necessary information is indeed properly distributed, but without actually revealing that information (cf. [24] for an example applied to key escrow in communication networks); see the sketch after this list.
- Auditable tracing [18] aims to prevent unlawful tracing by trusted third parties by making the tracing itself (and whether that tracing was done on a sound legal basis) detectable by the users of the system. The assumption is that abuse of the tracing abilities of the system is deterred because any unlawful use will be discovered after a certain time.
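As an illustration of the secret-sharing flavour of this toolbox, the following is a minimal Feldman-style verifiable secret sharing sketch. It is an assumption chosen for illustration, not a scheme from the references: fully publicly verifiable schemes additionally let outsiders check encrypted shares, but the idea is the same. The dealer publishes commitments to the polynomial coefficients, so each shareholder can verify that its share is consistent with all others without anyone learning the secret, and any t shareholders can jointly reconstruct it.

```python
import random

# Toy parameters (hypothetical and far too small for real use):
# P = 2Q + 1 is a safe prime, G generates the subgroup of prime order Q.
P, Q, G = 2039, 1019, 4

def share(secret: int, t: int, n: int):
    """Split `secret` (mod Q) into n shares; any t of them reconstruct it."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(t - 1)]
    shares = [(i, sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q)
              for i in range(1, n + 1)]
    commitments = [pow(G, c, P) for c in coeffs]   # public: G^coefficient
    return shares, commitments

def verify(share_, commitments):
    """Anyone holding a share can check it against the public commitments."""
    i, s_i = share_
    expected = 1
    for j, C in enumerate(commitments):
        expected = expected * pow(C, pow(i, j, Q), P) % P
    return pow(G, s_i, P) == expected

def reconstruct(shares):
    """Lagrange interpolation at x = 0 (mod Q) recovers the secret."""
    secret = 0
    for i, s_i in shares:
        num, den = 1, 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % Q
                den = den * (i - j) % Q
        secret = (secret + s_i * num * pow(den, Q - 2, Q)) % Q
    return secret

shares, commitments = share(secret=42, t=3, n=5)
assert all(verify(s, commitments) for s in shares)
assert reconstruct(shares[:3]) == 42
```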
The latter method has been successfully applied to implement an offline traceable payment scheme. The main drawback is that abuse is not prevented outright, and may surface only much later, after considerable damage to particular persons has already been done.
Perhaps anonymity control could also be implemented in pseudonymous identity management systems. That would help decrease (government) resistance against such systems. However, the anonymity control in idemix [4] is quite rudimentary: it is based on an all-or-nothing approach and only protects against the pooling of credentials among users.
New approaches
There are also other techniques that appear to be promising. One example is so-called homomorphic cryptographic techniques, like threshold encryption [11], where each of n users encrypts a value using its own private key, with the property that, in order to decrypt the value (using a single public key), one needs at least t different encrypted values.
To see how threshold encryption can help achieve revocable privacy, consider the following real-life example. So-called "canvas cutters" are criminals that roam the parking places along highways looking for trucks with valuable content (by cutting the canvas). To identify possible canvas cutters, one could set up ANPR (Automatic Number Plate Recognition) systems at the entry of each parking place, and search the resulting data stream for cars that entered multiple parking places along the same highway on a single day. Apart from identifying police cars and the AA (whose vehicles show similar driving patterns), this should identify canvas cutters as well. Clearly this poses a privacy threat, as data on all cars visiting a parking place is retained. One could choose to retain the data coming from a single ANPR system for only a couple of hours, but this is merely a procedural measure. Another option is to immediately encrypt all number plates recognised by each ANPR system using a threshold encryption scheme, and to store only these encrypted number plates. Setting the threshold at a suitable level, e.g., 3, the authorities can only retrieve the number plates of cars that visited at least 3 parking places on a single day along a single stretch of highway.
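The following is a minimal sketch of how such a mechanism could be realised with Shamir-style secret sharing. It is an illustration under several simplifying assumptions and not the actual ANPR scheme: the stations along one stretch of highway share a key that the central database never sees, the per-plate polynomial coefficients are derived from that key, and real-world issues such as the small space of possible number plates are ignored. Each station stores, instead of the plate itself, an opaque tag plus one share of the plate; only once shares with the same tag from at least t = 3 different stations have accumulated can the plate be reconstructed.

```python
import hashlib, hmac

Q = 2**127 - 1                      # Mersenne prime; field for the shares
T = 3                               # threshold: plates seen at >= 3 stations leak

def _prf(key: bytes, *parts: str) -> int:
    msg = "|".join(parts).encode()
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % Q

def observe(station_key: bytes, station_id: int, plate: str, day: str):
    """Runs inside one ANPR station: emit (tag, share) instead of the raw plate.

    The polynomial f(x) = plate + a1*x + a2*x^2 (degree T-1) is derived
    deterministically from the plate and the day, so every station observing
    the same plate on the same day produces a point on the *same* polynomial.
    """
    secret = int.from_bytes(plate.encode(), "big") % Q
    coeffs = [secret] + [_prf(station_key, "coef", plate, day, str(j)) for j in range(1, T)]
    x = station_id
    y = sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q
    tag = hmac.new(station_key, f"tag|{plate}|{day}".encode(), hashlib.sha256).hexdigest()
    return tag, (x, y)

def reconstruct(points):
    """Lagrange interpolation at 0; only possible with >= T distinct points."""
    secret = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % Q
                den = den * (x_i - x_j) % Q
        secret = (secret + y_i * num * pow(den, Q - 2, Q)) % Q
    return secret.to_bytes((secret.bit_length() + 7) // 8, "big").decode()

# The stations share a key; the central database only ever sees tags and shares.
key = b"shared-station-key (illustrative)"
db = {}
for station, plate in [(1, "XX-12-YZ"), (2, "XX-12-YZ"), (5, "AB-34-CD"), (7, "XX-12-YZ")]:
    tag, point = observe(key, station, plate, "2009-06-01")
    db.setdefault(tag, []).append(point)

for tag, points in db.items():
    if len(points) >= T:                       # rule violated: >= 3 parking places
        print("suspect plate:", reconstruct(points[:T]))
```

Cars that visit only one or two parking places leave behind fewer than T shares, so their plates remain hidden in the database; only the rule violators become readable, which is exactly the self-enforcing behaviour revocable privacy asks for.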
Another approach that deserves further study is the work of Galindo and Verheul [13] on pseudonymous data sharing. Their system allows so-called researchers to execute queries over combined data coming from different suppliers, who register data under mutually disjoint pseudonyms. So-called accumulators, which are semi-trusted, stand between the researchers and the suppliers and combine data belonging to the same entity using knowledge of the underlying structure of the pseudonyms.
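The following toy sketch is not the Galindo-Verheul construction itself, but it conveys, under illustrative assumptions, the flavour of structured, convertible pseudonyms: each supplier stores data under the pseudonym H(id)^k_i, which is unlinkable across suppliers, while a semi-trusted accumulator that knows the supplier keys and a research key can re-key all of them to a single research pseudonym H(id)^k_R, joining the records without ever learning the identity itself.

```python
import hashlib

# Toy group parameters (hypothetical, far too small for real use):
# P = 2Q + 1 is a safe prime; squaring maps into the subgroup of prime order Q.
P, Q = 4007, 2003

def h2g(identity: str) -> int:
    """Hash an identity into the order-Q subgroup of Z_P*."""
    h = int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % P
    return pow(h, 2, P)

def supplier_pseudonym(identity: str, supplier_key: int) -> int:
    """A supplier registers data under H(id)^k_i; pseudonyms of different
    suppliers for the same person look unrelated without the keys."""
    return pow(h2g(identity), supplier_key, P)

def convert(pseudonym: int, supplier_key: int, research_key: int) -> int:
    """Semi-trusted accumulator: re-key a supplier pseudonym to the common
    research pseudonym H(id)^k_R, without learning the identity."""
    exponent = research_key * pow(supplier_key, -1, Q) % Q
    return pow(pseudonym, exponent, P)

# Two suppliers with independent keys, one research key held by the accumulator
# (fixed toy values; in practice these would be secret and randomly generated).
k1, k2, kR = 1234, 567, 89
alice_at_1 = supplier_pseudonym("alice", k1)
alice_at_2 = supplier_pseudonym("alice", k2)
print("supplier 1 stores under:", alice_at_1, " supplier 2 stores under:", alice_at_2)
# After conversion both records fall under the same research pseudonym,
# so the researcher can join them without ever seeing "alice".
assert convert(alice_at_1, k1, kR) == convert(alice_at_2, k2, kR)
```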
References
- Abe, M., and Ohkubo, M. Provably secure fair blind signatures with tight revocation. In ASIACRYPT (2001), C. Boyd, Ed., vol. 2248 of Lecture Notes in Computer Science, Springer, pp. 583-602.
- Camenisch, J., Groß, T., and Heydt-Benjamin, T. S. Rethinking accountable privacy supporting services: extended abstract. In Digital Identity Management (2008), E. Bertino and K. Takahashi, Eds., ACM, pp. 1-8.
- Camenisch, J., Hohenberger, S., and Lysyanskaya, A. Balancing accountability and privacy using e-cash (extended abstract). In SCN (2006), R. D. Prisco and M. Yung, Eds., vol. 4116 of Lecture Notes in Computer Science, Springer, pp. 141-155.
- Camenisch, J., and Lysyanskaya, A. An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In EUROCRYPT (2001), B. Pfitzmann, Ed., vol. 2045 of Lecture Notes in Computer Science, Springer, pp. 93-118.
- Chaum, D. Security without identification: Transaction systems to make big brother obsolete. Comm. ACM 28, 10 (1985), 1030-1044.
- Chaum, D. Achieving electronic privacy. Scientific American (Aug. 1992), 96-101.
- Chaum, D., Fiat, A., and Naor, M. Untraceable electronic cash. In CRYPTO (1988), S. Goldwasser, Ed., vol. 403 of Lecture Notes in Computer Science, Springer, pp. 319-327.
- Chor, B., Goldwasser, S., Micali, S., and Awerbuch, B. Verifiable secret sharing and achieving simultaneity in the presence of faults (extended abstract). In FOCS (1985), IEEE, pp. 383-395.
- Claessens, J., Díaz, C., Goemans, C., Dumortier, J., Preneel, B., and Vandewalle, J. Revocable anonymous access to the internet? Internet Research 13, 4 (2003), 242-258. MCB UP Ltd.
- Commission, E. Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, Nov. 1995.
- Fouque, P.-A., Poupard, G., and Stern, J. Sharing decryption in the context of voting or lotteries. In Financial Cryptography (2000), Y. Frankel, Ed., vol. 1962 of Lecture Notes in Computer Science, Springer, pp. 90-104.
- Fried, C. Privacy. Yale Law Journal 77, 3 (1968), 475-493.
- Galindo, D., and Verheul, E. Microdata sharing via pseudonymization. Tech. rep., Joint UNECE/Eurostat work session on statistical data confidentiality, 2007.
- Jacobs, B. Select before you collect. Ars Aequi 54 (Dec. 2005), 1006-1009.
- Jacobs, B. De menselijke maat in ICT [The human measure in ICT]. ISBN 978-90-9021619-5, Jan. 2007.
- Koops, B.-J. Strafvorderlijk onderzoek van (tele)communicatie 1838-2002: het grensvlak tussen opsporing en privacy [Criminal-procedure investigation of (tele)communication 1838-2002: the interface between investigation and privacy]. Kluwer, Deventer, 2002.
- Köpsell, S., Wendolsky, R., and Federrath, H. Revocable anonymity. In ETRICS (2006), G. Müller, Ed., vol. 3995 of Lecture Notes in Computer Science, Springer, pp. 206-220.
- Kügler, D., and Vogt, H. Offline payments with auditable tracing. In Financial Cryptography (2002), M. Blaze, Ed., vol. 2357 of Lecture Notes in Computer Science, Springer, pp. 269-281.
- Lessig, L. Code and Other Laws of Cyberspace. Basic Books, 1999.
- Pfitzmann, A., and Hansen, M. Anonymity, unlinkability, undetectability, unobservability, pseudonymity, and identity management - a consolidated proposal for terminology (version v0.31, Feb. 15, 2008). http://dud.inf.tu-dresden.de/Anon_Terminology.shtml.
- Solove, D. J. "I’ve got nothing to hide" and other misunderstandings of privacy. San Diego Law Review, 44 (2007), 745.
- Stadler, M. Cryptographic Protocols for Revocable Privacy. PhD thesis, Swiss Federal Institute of Technology, Zürich, 1996.
- Stadler, M., Piveteau, J.-M., and Camenisch, J. Fair blind signatures. In EUROCRYPT (1995), pp. 209-219.
- Verheul, E. R., and van Tilborg, H. C. A. Binding ElGamal: A fraud-detectable alternative to key-escrow proposals. In EUROCRYPT (1997), pp. 119-133.
- Warren, S. D., and Brandeis, L. D. The right to privacy. The implicit made explicit. Harvard Law Review IV, 5 (Dec. 15, 1890), 193-220.
- Wet bescherming persoonsgegevens (Wbp) [Dutch Personal Data Protection Act].
- Wright, R. N., Camp, L. J., Goldberg, I., Rivest, R. L., and Wood, G. Privacy tradeoffs: Myth or reality? panel. In Financial Cryptography (2002), M. Blaze, Ed., vol. 2357 of Lecture Notes in Computer Science, Springer, pp. 147-151.