Electronic and aerial surveillance, biometrics, closed circuit TV (CCTV), identity cards, radio frequency identification (RFID) codes, online security, encryption, the Google ‘right to be forgotten’ controversy, interception of email, the monitoring of employees, DNA, cloning, stem cell research, the ‘war on terror’—to mention only a few—all raise fundamental questions about privacy. Reports of the fragility of ‘privacy’ have, of course, been sounded for at least a century. In respect of the future of ‘privacy’, there can be little doubt that the questions are changing before our eyes. And if, in the flat-footed domain of atoms, we have achieved only limited success in protecting individuals’ privacy, how much better the prospects in our binary universe? An account of some of the major forms of intrusion is provided, and controls over their use proposed.
Scarcely a day passes without reports of yet another onslaught on our privacy. Our brave new digital world has all but annihilated a right we once took for granted. The ubiquity of Big Brother no longer shocks. ‘Low-tech’ collection of transactional data in both the public and private sector has become commonplace. In addition to the routine surveillance by closed circuit TV (CCTV) in public places, the monitoring of mobile telephones, the workplace, vehicles, electronic communications, and online activity has swiftly become widespread in most advanced societies. In the late 18th century, the Utilitarian philosopher and law reformer, Jeremy Bentham, developed the idea of the ‘Panopticon’—a building that enabled a single guard to observe the inmates of an institution unawares: the mere fact that they know they may be under surveillance results in their acting as if they were. This would ensure, he argued, that they conducted themselves appropriately. The concept is invoked figuratively by the French social theorist, Michel Foucault, in his celebrated book, Discipline and Punish, to describe contemporary ‘disciplinary’ society’s proclivity to observe and ‘normalize’. The Panopticon, in his view, generates an awareness of perpetual visibility as a form of power and domination—without the need for locks and chains. It extends beyond prisons to all hierarchical institutions such as schools, hospitals, the army, and so on.
Our privacy is increasingly under siege, though the concept of ‘privacy’ extends beyond these sorts of intrusions whose principal pursuit is personal information. It is invoked to include a multiplicity of incursions into the private domain—especially by the government—captured in the celebrated slogan, ‘the right to be let alone’. This phrase (employed by Warren and Brandeis in their seminal 1890 Harvard Law Review article) is redolent of the no less famous 17th century declaration by Sir Edward Coke that ‘a man’s house is his castle’, and has come to embrace an extensive assortment of invasions that encroach not only upon ‘spatial’ and ‘locational’ privacy, but also interfere with ‘decisional’ matters often of a moral character such as abortion, contraception, and sexual preference.
However, these questions, central though they are to freedom of choice and individual autonomy, are not fundamental to the idea of privacy. In Nineteen Eighty-Four, on the other hand, George Orwell’s portrayal of a terrifying surveillance state is vividly realized in his description of a simple room which Winston (mistakenly) believes is free from the eyes of Big Brother:
It seemed to him that he knew exactly what it felt like to sit in a room like this, in an armchair beside an open fire with your feet in the fender and a kettle on the hob: utterly alone, utterly secure, with nobody watching you, no voice pursuing you, no sound except the singing of the kettle and the friendly ticking of the clock.
It is this shelter from observation that, I suggest, lies at the heart of the most cogent and the most compelling conception of privacy. The true meaning of privacy corresponds with our intuitive understanding and use of this fundamental democratic value. Privacy is, above all, a concern to protect sensitive information. When we lament its demise, we mourn the loss of control over intimate facts about ourselves. And the essence of that control is our capacity to determine the fate of the most intimate facts about us, whether they be pried upon or gratuitously published or disclosed. This core feature of individual privacy is captured in other similar dystopian depictions of totalitarian societies such as Aldous Huxley’s Brave New World; Philip K. Dick’s A Scanner Darkly, Eye in the Sky, and Ubik; Paul J. McAuley’s Whole Wide World; David Brin’s Earth; Charles Stross’s Glasshouse; and, most recently, The Circle by Dave Eggers. There are also countless films with similar themes of surveillance and totalitarianism such as Brazil, The Conversation, Minority Report, The Truman Show, Enemy of the State, Code 46, and Gattaca.
In the case of surveillance, a moment’s reflection will reveal some of its many ironies—and difficulties. Its nature—and our reaction to it—is neither straightforward nor obvious. Is ‘Big Brother is Watching You’ a threat, a statement of fact, or merely mendacious intimidation? Does it make any difference? Is it the knowledge that I am being observed by, say, a CCTV camera, that violates my privacy? What if the camera is a (now widely available) imitation that credibly simulates the action of the genuine article: flashing light, probing lens, menacing swing? Nothing is recorded, but I am unaware of its innocence. What is my objection? Or suppose the camera is real, but faulty—and no images are made, stored, or used? My actions have not been monitored, yet subjectively my equanimity has been disturbed. The mere presence of a device that appears to be observing and recording my behaviour is surely tantamount to its being there in reality.
In other words, it is the belief that I am being watched that is my grievance. It is immaterial whether I am in fact the subject of surveillance. My objection is therefore not that I am being observed—for I am not—but the possibility that I may be (see Figure 1).
In this respect, being watched by a visible CCTV camera differs from that other indispensable instrument of the spy: the electronic listening device. When my room or office is bugged, or my telephone is tapped, I am—by definition—usually oblivious to this infringement of my privacy. Yet my ignorance does not, of course, render the practice inoffensive. Unlike the case of the fake or non-functioning camera, however, I have been subjected to surveillance: my private conversations have been recorded or intercepted, albeit unwittingly. The same would be true of the surreptitious interception of my correspondence: email or snail mail.
In the former case, no personal information has been obtained; in the latter, it has, but I may never know. Both practices are subsumed in the category of ‘intrusion’, yet each exhibits a distinctive apprehension. Indeed, the more one examines this (neglected) problem, the less cohesive the subject of ‘intrusion’ becomes. Each activity requires a separate analysis; each entails a discrete set of concerns, though they are united in a general anxiety that one’s society may be approaching, or already displays features of, the Orwellian horror of relentless scrutiny.
As with Bentham’s Panopticon, the question is fundamentally one of perception and its consequences. Although my conviction that I am being monitored by CCTV is based on palpable evidence, and my ignorance of the interception of my correspondence or conversations is plainly not, the discomfort is similar. In both cases, it is the distasteful recognition that one needs to adjust one’s behaviour—on the assumption that one’s words or deeds are being monitored. During the darkest years of repression in apartheid South Africa, for example, the telephones of anti-government activists were routinely tapped by the security services. One’s conversations were therefore conducted with circumspection and trepidation. This inevitably rendered dialogue stilted and unnatural. It is this requirement to adapt or adjust one’s behaviour in public (in the case of CCTV) or in private (on the telephone, in one’s home, or online) that is the disquieting result of a state that fails properly to regulate the exercise of surveillance.
Privacy and state surveillance
In 2010, when the first edition of this book was in press, the now famous (or infamous) international, online, journalistic NGO (non-governmental organization), Wikileaks, led by Julian Assange, began releasing a huge number of documents relating, in the main, to the wars in Afghanistan and Iraq. They included a video of an airstrike in Baghdad which had resulted in the death of several Iraqi journalists; almost 80,000 previously classified documents about the war in Afghanistan; and 779 secret files concerning prisoners detained at the Guantanamo prison camp. Towards the end of that year it released almost 400,000 secret United States (US) military logs detailing its operations in Iraq, and it collaborated with several media organizations to disclose US State Department diplomatic cables. In 2013, Bradley (now Chelsea) Manning, a 25-year-old soldier, was convicted of twenty charges in connection with the leaks, including espionage, and sentenced to thirty-five years’ imprisonment.
But its activities, though continuing, have been eclipsed by the massive revelations in June 2013 by a former US government contractor, Edward Snowden (see Figure 2). In a dramatic sequence of whistle-blowing, he revealed the enormous extent of the surveillance conducted by the National Security Agency (NSA). The first act in the drama was the disclosure by the British newspaper, The Guardian, that the NSA was collecting the telephone records of tens of millions of Americans. This was soon followed by the publication of PowerPoint slides by The Guardian and the Washington Post that purported to show that the NSA had direct access—via the PRISM programme—to the servers of several major tech companies, including Apple, Google, and Microsoft. The Guardian subsequently exposed the fact that these companies had worked closely with the NSA to assist it in evading encryption and other privacy controls. It disclosed, for instance, that in February 2013 alone the NSA collected almost three billion pieces of intelligence on Americans. Over the ensuing months a string of leaks emanating from Snowden was released by both newspapers divulging the massive scope of surveillance undertaken, not only by the NSA, but by security services in numerous countries. They included disclosures of spying by both the US and Britain on foreign leaders, and the storage by the NSA of domestic communications that contain foreign intelligence information; the suggestion of a crime; threats of serious harm to life or property; or any other evidence that could advance its electronic surveillance—including encrypted communications. Moreover, it emerged that, since December 2012, the NSA has had the capacity to collect a trillion metadata records. Other leaks suggested that the US had spied on the European Union (EU) offices in New York, Washington, DC, and Brussels, as well as the embassies of France, Greece, India, Italy, Japan, Mexico, South Korea, and Turkey.
There was more. Evidence in the form of the PowerPoint slides appeared to show the existence of an NSA programme called ‘Upstream’ that collects information from the fibre optic cables that carry most Internet and phone traffic. Other slides revealed a programme comprising a network of 500 servers scattered across the world that amass almost all online activities conducted by a user, storing the information in databases searchable by name, email, IP address, region, and language. Further leaks divulged that private companies, including several telecoms, provide the British security agency, GCHQ, with unrestricted access to their fibre optic cable networks, which carry a vast quantity of Internet and telephone traffic.
In addition, according to these leaks, the NSA has succeeded in cracking the encryption methods widely used by millions of individuals and organizations to protect their email, e-commerce, and financial transactions. It is also alleged that the NSA employs its colossal databases to store metadata, such as email correspondence, online searches, and the browsing history of millions of Internet users for up to a year, regardless of whether they are targets of the agency.
The Washington Post claimed that the NSA had collected bulk mobile phone location data on ordinary individuals across the globe. It apparently channels five billion mobile phone location records per day into its immense database. Complex algorithms can use these data to determine, for example, whether two people are together in a crowded city. The newspaper also exposed the fact that the agency obtains access to computer systems by embedding them with malware that facilitates remote access to those computers—even when they are unconnected to an outside network. It claimed that the US has two data centres in China specifically to insert malware on targeted Chinese computer systems.
The leaks even suggested that data are collected from smartphone apps, and that the NSA has the capacity to implant malware in millions of computers around the world, enabling it to obtain access to users’ sensitive data. The automated system employs phishing emails and false versions of popular webpages such as Facebook to infect computers (see under Malware below). Another leak alleged that the NSA collects the contents and metadata from every telephone call made in specific target countries and stores them for 30 days, including calls from US citizens who live, visit, or telephone others abroad.
The international shockwaves generated by Snowden’s exposures continue to raise searching questions about the acceptable limits of state surveillance in a democratic society. Recently, Vodafone, one of the world’s largest mobile phone groups, disturbingly revealed the existence of secret wires that enable government agencies to monitor all conversations on its networks. It claimed that they are widely used in some of the twenty-nine countries in which it operates in Europe and beyond.
The threat of terrorism cannot be taken lightly, but unless individual privacy is to be wholly extinguished, the effective oversight of security services is indispensable. In March 2014, President Obama announced that the NSA’s bulk collection of Americans’ telephone records would be terminated. He acknowledged that trust in the intelligence services had been shaken, and pledged to address the concerns of privacy advocates. Leaders of the House intelligence committee stated that a deal would be struck with the White House to overhaul the surveillance programme.
The constitutionality of the NSA’s mass collection of telephone records has now been challenged by the American Civil Liberties Union (ACLU). The complaint contends that the dragnet under Section 215 of the Patriot Act infringes the right of privacy protected by the Fourth Amendment, as well as the First Amendment rights of free speech and association. The lawsuit seeks to terminate the agency’s mass domestic surveillance, and to require the deletion of all data collected. A federal judge denied the ACLU’s motion for a preliminary injunction, and granted the government’s motion to dismiss. The ACLU appealed this decision to the US Court of Appeals for the Second Circuit in New York.
In May 2014 the White House published a report recommending that private companies be obliged to disclose the kind of information they gather from their customers online. The report, a response to the Snowden revelations, specifically refers to the terms of service that consumers click on, invariably without reading them, when they sign up for free email accounts or download apps. The report emphasizes mosaic techniques that permit companies which ostensibly collect anonymous data from large groups of users to identify individual users’ online activities. It proposes the adoption of a number of approaches, including a mandatory system that would compel firms to report data breaches.
A recent development in the field of data collection and storage is the advent of so-called big data, a term used to describe the exponential increase and availability of data. It is characterized by what has been called the ‘three Vs’: volume, velocity, and variety. In respect of the first, the volume is a consequence of the ease of storage of transaction-based data, unstructured data streaming in from social media, and the accumulation of sensor and machine-to-machine data. Second, data are streamed at high velocity from radio frequency identification (RFID) tags, sensors, and smart metering. And, third, data assume a multiplicity of forms, including structured numeric data in traditional databases and line-of-business applications, as well as unstructured text documents, video, audio, email, and financial transactions.
Its advocates claim that it affords opportunities to correlate data in order to combat crime, prevent disease, forecast weather patterns, identify business trends, and so on. Its detractors question the reliability of its correlations, and the interpretation of the results. According to Patrick Tucker, however, ‘we have left the big data era and have entered the telemetric age … the collection and transfer of data in real time, as though sensed.’
Privacy at work
The increasing use of surveillance in the workplace is changing not only the character of that environment, but also the very nature of what we do and how we do it. The knowledge that our activities are, or even may be, monitored undermines our psychological and emotional autonomy:
Free conversation is often characterized by exaggeration, obscenity, agreeable falsehoods, and the expression of antisocial desires or views not intended to be taken seriously. The unedited quality of conversation is essential if it is to preserve its intimate, personal and informal character.
Indeed, the slide towards electronic supervision may fundamentally alter our relationships and our identity. In such a world, employees are arguably less likely to execute their duties effectively. If that occurs, the snooping employer will, in the end, secure the precise opposite of what he or she hopes to achieve.
Both landlines and mobile phones are easy prey to the eavesdropper. In the case of the former, the connection is simply a long circuit comprising a pair of copper wires that form a loop. The circuit carrying your conversation runs out of your home through numerous switching stations between you and the instrument on the other end. At any point a snoop can attach a new load to the circuit, much in the way one plugs an additional appliance into an extension cord. In the case of wiretapping, that load is a mechanism that converts the electrical signal back into the sound of your conversation. The chief shortcoming of this primitive form of interception is that the spy needs to know when the subject is going to use the phone. He needs to be at his post to listen in.
A less inconvenient and more sophisticated method is to install a recording device on the line. Like an answering machine, it picks up the electrical signal from the telephone line and encodes it as magnetic pulses on audiotape. The disadvantage of this method is that the intruder needs to keep the recorder running continuously in order to monitor any conversations. Few cassettes are large enough. Hence a voice-activated recorder provides a more practical alternative. But here too the tape is unlikely to endure long enough to capture the subject’s conversations.
The obvious answer is a bug that receives audio information and broadcasts it using radio waves. Bugs normally have diminutive microphones that pick up sound waves directly. The current is sent to a radio transmitter that conveys a signal that varies with the current. The spy sets up a radio receiver in the vicinity that picks up this signal and transmits it to a speaker or encodes it on a tape. A bug with a microphone is especially valuable since it will hear any conversation in the room, regardless of whether the subject is on the phone. A conventional wiretapping bug, however, can operate without its own microphone, since the telephone has one. All the wiretapper needs to do is to connect the bug anywhere along the phone line, since it receives the electrical current directly. Normally, the spy will connect the bug to the wires inside the telephone.
This is the classic approach. It obviates the need for the spy to revisit the site; the recording equipment may be concealed in a van that typically is parked outside the victim’s home or office.
Tapping mobile phones requires the interception of radio signals carried from and to the handsets, and converting them back into sound. The analogue mobile phones of the 1990s were susceptible to easy interception, but their contemporary digital counterparts are much less vulnerable. To read the signals, the digital computer bits need to be converted into sound—a fairly complex and expensive operation. But mobile phone calls may be intercepted at the mobile operator’s servers, or on a fixed-line section that carries encrypted voice data for wireless communication.
When you call someone on your mobile phone, your voice is digitized and sent to the nearest base station. That station transmits it, via the mobile carrier’s switching network, to the base station nearest the recipient. Between the base stations, transmission of voice data is effected on landlines, as occurs in the case of fixed-line phone calls. To an eavesdropper listening over the landline segment of the connection, mobile phones are thus not dissimilar to conventional phones—and are just as vulnerable.
Prior to 2011 numerous politicians and celebrities claimed that the voicemail left on their mobile telephones had been hacked into by the British newspaper, the News of the World. The newspaper’s royal editor and a private investigator were convicted of hacking messages and sentenced to imprisonment. But the practice assumed seismic proportions when, in July 2011, The Guardian reported police suspicion that the mobile telephone of murdered teenager, Milly Dowler, had been hacked by the News of the World, and that messages had been deleted to liberate space for new voicemail.
The allegations ignited widespread outrage, and, following denunciation from politicians, the newspaper was closed down by its proprietor, Rupert Murdoch, who paid Milly Dowler’s parents and various charities more than $4 million in compensation. The outcry generated by the scandal led to the British government establishing a judge-led enquiry into ‘the culture, practices, and ethics of the press.’
The terms of reference of the enquiry extended well beyond the specific matter of intercepted voicemail; they required its chairman, Lord Justice Leveson, to investigate a range of associated questions such as the relationship between politicians, the media, and the police, and the extent to which press self-regulation stood in need of reform (see Chapter 4).
The privacy prognosis
The future of surveillance seems daunting. It promises more sophisticated and alarming intrusions into our private lives, including the greater use of biometrics and sense-enhanced searches: satellite monitoring; through-the-wall surveillance (TWS) that can penetrate 30 centimetres of concrete and 15 metres beyond that into a room; laser-based molecular scanners that, from a distance of 50 metres, can penetrate one’s body, clothes, or luggage; and ‘smart dust’ devices—minuscule wireless micro-electromechanical sensors (MEMS) that can detect everything from light to vibrations. These so-called ‘motes’—as tiny as a grain of sand—would collect data that could be sent via two-way radio between motes up to 300 metres away.
As cyberspace becomes an increasingly perilous domain, we learn daily of new, disquieting assaults on its citizens. This slide towards pervasive surveillance coincides with the mounting fears, expressed well before 11 September 2001, about the disconcerting capacity of the new technology to undermine our liberty. Reports of the fragility of privacy have been sounded for at least a century. But in the last decade they have assumed a more urgent form. And here lies a paradox. On the one hand, recent advances in the power of computers have been decried as the nemesis of whatever vestiges of our privacy still survive. On the other, the Internet is acclaimed as a Utopia. When clichés contend, it is imprudent to expect sensible resolutions of the problems they embody, but between these two exaggerated claims, something resembling the truth probably resides. In respect of the future of privacy at least, there can be little doubt that the questions are changing before our eyes. And if, in the flat-footed domain of atoms, we have achieved only limited success in protecting individuals against the depredations of surveillance, how much better the prospects in our brave new binary world?
To take a recent example, Google has introduced ‘Google Glass’, a wearable computer that resembles ordinary spectacles. It has a miniature projector, a camera, a microphone, and a touchpad on one side of the device. Users exercise control by voice commands as well as swipes and taps on the touchpad. With the device linked to the Internet and using Google as well as third-party applications, wearers can see information superimposed on physical scenes. This development has potentially far-reaching consequences for the security of personal information. It could be used for surveillance or to record and broadcast private conversations. If facial recognition is incorporated, anyone within sight of the Glass-wearer could be identified, along with whatever personal data about them is searchable online.
When our security is under siege, so—inevitably—is our liberty. However, a world in which our every movement is observed erodes the very freedom this snooping is often calculated to protect. We need to ensure that the social costs of the means employed to enhance security do not outweigh the benefits. Thus, one unsurprising consequence of the installation of CCTV in car parks, shopping centres, airports, and other public places is the displacement of crime: offenders simply go somewhere else. And, apart from the doors this intrusion opens to totalitarianism, a surveillance society can easily generate a climate of mistrust and suspicion; a reduction in the respect for law and those who enforce it; and an intensification of prosecution of offences that are susceptible to easy detection and proof.
Other developments have comprehensively altered basic features of the legal landscape. The law has been profoundly affected and challenged by countless other advances in technology. Computer fraud, identity (ID) theft, and other ‘cyber crimes’ are touched on later in this chapter.
Developments in biotechnology such as cloning, stem cell research, and genetic engineering provoke thorny ethical questions and confront traditional legal concepts. Proposals to introduce identity cards and biometrics have attracted strong objections in several jurisdictions. The nature of criminal trials has been transformed by the use of both DNA and CCTV evidence.
Orwellian supervision already appears to be alive and well in several countries. Britain, for example, boasts more than four million CCTV cameras in public places: roughly one for every fourteen inhabitants. It also possesses the world’s largest DNA database, comprising some 5.3 million DNA samples. The temptation, in both the public and private sectors, to install CCTV cameras is not easy to resist. Data-protection law (discussed in Chapter 5) ostensibly controls their use, but such regulation has not proved especially effective. A radical solution, adopted in Denmark, is to prohibit their use, subject to certain exceptions such as in petrol stations. The law in Sweden, France, and the Netherlands is more stringent than in the United Kingdom (UK). These countries adopt a licensing system, and the law requires that warning signs be placed on the periphery of the zone monitored. German law has a similar requirement.
But we cannot ignore the impact of the private sector in its pursuit of our personal information. The divide between the state and business is disintegrating. Information is routinely shared between the two. For example, Google may provide user data when subject to a court order. Or even without it. It is claimed that in the US profitable arrangements exist by which the state outsources data gathering to ten major telecommunications companies, including AT&T, Verizon, and T-Mobile, which reap considerable sums of money supplying law enforcement authorities with personal telecom information.
More disturbing, however, is the scale of the systematic collection of personal data by major websites such as Google, Facebook, and Amazon. And they frequently comply with government requests for consumers’ personal information:
These companies track one’s every keystroke, every order and bill payment one makes, every word and/or phrase in one’s emails, even one’s every mobile movement through GPS tracking. Data capture involves everything from your personal Social Security number, phone calls, arrest record, credit card transactions and online viewing preferences as well as your medical and insurance records and even personal prescriptions.
Smartphones and other wireless devices are two-way technologies. They contain uploaded personal data that is susceptible to misuse, and they are besieged by downloaded spam. Downloading ‘free’ apps, including games, often admits a Trojan horse (see under Malware below).
Companies such as PayPal and Visa track online transactions. Google and other agencies obtain private information through cookies and ‘click-throughs’. And so-called private data aggregators (e.g. Intelius, LexisNexis, and ChoicePoint) collect personal data which they sell.
We are all unique. Your fingerprint is a ‘biometric’: the measurement of biological information. Fingerprints have long been used as a means of linking an individual to a crime, but they also provide a practical method of privacy protection: instead of logging into your computer with a (not always safe) password, increasing use is being made of fingerprint readers as a considerably more secure entry point. We are likely to see greater use of fingerprint readers at supermarket checkouts and ATMs.
There is no perfect biometric, but the ideal is to find a unique personal attribute that is immutable or, at least, unlikely to change over time. A measurement of this characteristic is then employed as a means of identifying the individual in question. Typically, several samples of the biometric are provided by the subject; they are digitized and stored on a database. The biometric may then be used either to identify the subject by matching his or her data against that of a number of other individuals’ biometrics, or to validate the identity of a single subject.
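The two modes of matching just described, validating a single subject (one-to-one verification) and matching against many stored biometrics (one-to-many identification), can be sketched in a few lines of code. The feature vectors, names, and threshold below are entirely hypothetical; real systems derive far richer templates (fingerprint minutiae, iris codes) and use far more sophisticated matching algorithms:

```python
import math

def distance(a, b):
    """Euclidean distance between two (hypothetical) biometric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(sample, enrolled_template, threshold=0.5):
    """One-to-one matching: does this sample belong to the claimed identity?"""
    return distance(sample, enrolled_template) <= threshold

def identify(sample, database, threshold=0.5):
    """One-to-many matching: which enrolled subject, if any, does this sample match?"""
    name, template = min(database.items(), key=lambda kv: distance(sample, kv[1]))
    return name if distance(sample, template) <= threshold else None

# Toy enrolment database: each subject's digitized samples reduced to one template.
db = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.7],
}
sample = [0.12, 0.88, 0.31]   # a fresh reading, close to alice's template
print(verify(sample, db["alice"]))   # True: within tolerance of the enrolled template
print(identify(sample, db))          # alice
```

Because no two readings of the same person are ever identical, the threshold embodies the trade-off between falsely accepting an impostor and falsely rejecting the genuine subject.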
In order to counter the threat of terrorism, we shall unquestionably witness an increased use of biometrics. This includes a number of measures of human physiography as well as DNA. The characteristics on which biometric technologies can be based include: one’s appearance (supported by still images), e.g. descriptions used in passports, such as height, weight, colour of skin, hair, and eyes, visible physical markings, gender, race, facial hair, the wearing of glasses; natural physiography, e.g. skull measurements, teeth and skeletal injuries, thumbprint, fingerprint sets, handprints, iris and retinal scans, earlobe capillary patterns, heart-beat, hand geometry; biodynamics, e.g. the manner in which one’s signature is written, statistically analysed voice characteristics, keystroke dynamics, particularly login-ID and password; social behaviour (supported by video-film), e.g. habituated body signals, general voice characteristics, style of speech, gait, visible handicaps; imposed physical characteristics, e.g. dog-tags, collars, bracelets and anklets, barcodes, embedded microchips, and transponders.
Anxieties have been expressed about a facial recognition database being developed by the Federal Bureau of Investigation (FBI) that could contain 52 million images. The Electronic Frontier Foundation (EFF) has expressed concern that images of non-criminals would be stored alongside those of criminals. The database is part of the bureau’s ‘Next Generation Identification’ (NGI) programme, a large biometric database which will replace the current Integrated Automated Fingerprint Identification System (IAFIS). It is being developed to include the capture and storage of fingerprints, iris scans, and palm prints. There is a fear that biometrics may be used in a way that undermines individual privacy. A pessimistic scenario is that biometrics providers will thrive by selling their technology to repressive governments, and establish a foothold in relatively free countries by seeking soft targets; they may start with animals or with captive populations such as the frail, the poor, the old, prisoners, employees, and so on. A less gloomy scenario is that societies will recognize the gravity of the threat and enforce constraints on technologies and their use. This would require public support and the courage of elected representatives, who will need to resist pressure from both large corporations and national security and law enforcement authorities, who invoke the bogeymen of terrorism, illegal immigration, and domestic ‘law and order’ to justify the implementation of this technology.
Our life online is increasingly vulnerable—not only to hackers and malicious software, but also to surveillance, tracking, monitoring, and profiling by both the state and businesses. The state’s primary interest is in security, while the principal objective of business is obviously the procurement of money. Each requires personal data, normally on a large scale, in order to achieve their respective ends. The result is a diminution of the user’s privacy—sometimes with devastating consequences. Surfing the web, anonymously if necessary, using a search engine, entering into online transactions, sending and receiving email—all these quotidian activities—are seriously impaired if we are subjected to monitoring, whether it is a website that tracks our purchases or the security service that intercepts our messages.
The artillery of malicious software (or ‘malware’) includes viruses, worms, Trojan horses, spyware, ‘phishing’, ‘bots’, ‘zombies’, and bugs. A virus is a block of code that introduces copies of itself into other programs. It normally carries a payload, which may have only nuisance value, though in many cases the consequences are serious. In order to evade early detection, viruses may delay the performance of functions other than replication. A worm generates copies of itself over networks without infecting other programs. A Trojan horse is a program that appears to carry out a useful task (and sometimes does so), but conceals something nastier: for instance, a keystroke recorder embedded in a utility.
For example, a pernicious bug, dubbed ‘Heartbleed’—which had gone undetected for years—recently exposed a major flaw in the widely used OpenSSL cryptographic library (whose code was thought to be particularly secure), providing access to the memory of systems protected by the vulnerable versions of the OpenSSL software. It compromised the secret keys used to identify Internet service providers (ISPs) and to encrypt traffic, the names and passwords of users, and the actual content. The bug enabled attackers to snoop on communications, steal data directly from servers and users, and impersonate them. It has even been alleged that the US National Security Agency (NSA) was aware of the Heartbleed bug as long ago as 2012, and may have exploited the weakness to gather intelligence.
Communications service providers (CSPs) are required, under an EU regulation of 2013, to report all data breaches to regulators within twenty-four hours, and to notify data subjects ‘without undue delay’ when the breach is ‘likely to adversely affect the personal data or privacy’ of that individual.
The EU’s Article 29 Working Party has since issued guidance on when data controllers should alert data subjects to a personal data breach that is likely to affect their privacy. This obligation extends beyond the advent of bugs such as Heartbleed to include loss of laptops containing medical data and financial data; web vulnerabilities exposing life assurance and medical data; unauthorized access to an ISP’s customers’ details; disclosure of credit card slips; unauthorized access to subscribers’ account data both through unauthorized disclosure and through coding errors on a website.
The guidance also recommends appropriate safeguards, including encryption with a sufficiently strong and secret key; the secure storage of passwords; vulnerability scanning to diminish the risk of hacking and other breaches; regular back-ups; systems and process design to reduce the risk of a breach or to mitigate its effects, such as dissociating medical information from patients’ names; and restricting access to databases on a ‘need to know’ and ‘least privilege’ basis, including reducing the access provided to vendors for system maintenance.
Spyware is software—often hidden within an email attachment—that secretly harvests data about the user or the use of a device and passes it on to another party. It may record the user’s browsing history, log individual keystrokes (to obtain passwords), monitor user behaviour for consumer marketing purposes (so-called ‘adware’), or observe the use of copyrighted works. ‘Phishing’ normally takes the form of an email message that appears to emanate from a trusted institution such as a bank. It seeks to entice the addressee into divulging sensitive data such as a password or credit card details. The messages are normally highly implausible—replete with spelling mistakes and other obvious defects—yet this manifest deceit manages to dupe an extraordinarily high number of recipients.
Some malware filches personal data or transforms your computer into a ‘bot’—one that is remotely controlled by a third party. A ‘bot’ may be employed to collect email addresses, send spam, or mount attacks on corporate websites. Another form of attack is ‘Denial of Service’ (DoS), which uses a swarm of ‘bots’ or ‘zombies’ to inundate company websites with bogus data requests. Such an attack plants numerous processes dotted around the Internet under central or timed control (hence ‘zombies’), and pursues a website until it has been taken offline. This may endure for several days, incurring considerable costs to the victim company. These attacks are typically accompanied by demands for money.
A new threat is ‘armoured’ malware. The ‘armour’ is an added sheath of code around the virus that allows it to enter and operate unseen. The code renders the virus invisible either by appearing to be gibberish to the computer’s operating system or by concealing the virus elsewhere on the system. Software is now available that acts as a malware factory, producing one unique item of malware every second. Once released, the virus is protected by built-in encryption and additional ‘armoured’ code.
Bugs are errors in software—particularly found in Microsoft Windows software—that may render the user’s system vulnerable to attack by so-called ‘crackers’. Microsoft normally responds by issuing a patch for downloading—until the next bug materializes. An ‘exploit’ is an attack on a particular vulnerability. Standard techniques are supported by established guidelines and programming code that circulate on the Internet.
The leaks by Edward Snowden, discussed earlier, exposed the prodigious scale of state surveillance conducted through the NSA’s PRISM and other programmes. Although the degree of surveillance conducted in EU countries does not appear to reach the scale seen in the US, there is little room for complacency. It was reported in early 2009 that police in the EU have been encouraged to expand the implementation of a rarely used power of intrusion—without warrant. This will permit police across Europe to hack into private computers when an officer believes that such a ‘remote search’ is proportionate and necessary to prevent or detect serious crime (one which attracts a prison sentence of more than three years). This could be achieved in a number of ways, including the attachment of a virus to an email message that, if opened, would covertly activate the remote search facility.
In addition, an EU directive requires telecommunications operators and ISPs to retain records of users’ calls and online activity for two years in order to facilitate security services’ checks on ‘metadata’, such as who has been communicating with whom, from where, when, and for how long. The actual content of the messages may not be read.
This directive was struck down by the European Court of Justice on the ground that it violated rights to respect for private life and the protection of personal data, adding that the use of the data without an individual’s knowledge ‘is likely to generate in the persons concerned a feeling that their private lives are the subject of constant surveillance.’ Whether this judgment survives the security concerns of many European countries remains to be seen.
In Britain a draft Communications Data Bill failed to reach the statute book, thanks to opposition by the Liberal Democrats in the House of Commons. The proposed law would require ISPs and mobile telephone companies to maintain records (but not the content) of users’ Internet browsing (including on social media), email correspondence, voice calls, Internet gaming, and mobile phone messaging services, and to store the records for twelve months. Retention of email and telephone contact data for this time is already required. Further legislation in this field is inevitable.
Cookies are data that website servers transmit to a visitor’s browser and are stored on his or her computer. They enable the website to recognize the visitor’s computer as one with which it has previously interacted, and to remember details of the earlier transaction, including search words and the amount of time spent reading certain pages. In other words, cookie technology enables a website—by default—furtively to put its own identifier into my PC, permanently, in order to track my online conduct.
And cookies can endure; they may show an extensive list of each website visited during a particular period. Moreover, the text of the cookie file may reveal personal data previously provided. Websites such as Amazon.com justify this practice by claiming that it assists and improves the shopping experience by informing customers of books which, on the basis of their browsing behaviour, they might otherwise neglect to buy. But this gives rise to the obvious danger that my identity may be misrepresented by a concentration on tangential segments of my surfing or, on the other hand, personal data harvested from a variety of sources may be assembled to create a comprehensive lifestyle profile. I return to this subject in Chapter 5.
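The mechanism is simple enough to sketch with Python’s standard library. The cookie names and values below are invented for illustration; all a site needs to do is plant an identifier once, and the browser volunteers it back on every subsequent visit.

```python
from http.cookies import SimpleCookie

# How a site plants a persistent identifier (names and values invented):
response = SimpleCookie()
response["visitor_id"] = "a1b2c3d4"
response["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
print(response.output())  # Set-Cookie: visitor_id=a1b2c3d4; Max-Age=31536000

# On each later visit, the browser sends the cookie back, letting the
# site link the new request to the earlier ones (and to past searches):
request = SimpleCookie("visitor_id=a1b2c3d4; last_search=privacy")
print(request["visitor_id"].value)   # a1b2c3d4
print(request["last_search"].value)  # privacy
```

It is precisely this silent round trip, repeated across visits and (with third-party cookies) across sites, that makes the profiling described above possible.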
Hackers were once regarded as innocuous ‘cyber-snoops’ who adhered to a slightly self-indulgent, but quasi-ethical, code dictating that one ought not to purloin data, but merely to report holes in the victim’s system. They were, as Lessig puts it, ‘a bit more invasive than a security guard, who checks office doors to make sure they are locked … (He) not only checked the locks but let himself in, took a quick peek around, and left a cute (or sarcastic) note saying, in effect, “Hey, stupid, you left your door open.” ’
While this laid-back culture eventually attracted the interest of law-enforcement authorities—who secured legislation against it—the practice continues to produce headaches. According to Simon Church of VeriSign, the online auction sites that criminals use to sell user details are merely the beginning. He anticipates that ‘mashup’ sites that combine different databases could be converted to criminal use. ‘Imagine if a hacker put together information he’d harvested from a travel company’s database with Google Maps. He could provide a tech-savvy burglar with the driving directions of how to get to your empty house the minute you go on holiday.’
The appropriation of an individual’s personal information to commit fraud or to impersonate him or her is an escalating problem costing billions of dollars a year. In 2007 a survey by the US Federal Trade Commission found that in 2005 a total of 3.7 per cent of survey participants indicated that they had been victims of identity theft. This result suggests that approximately 8.3 million Americans suffered some form of identity theft in that year, and 10 per cent of all victims reported out-of-pocket expenses of $1,200 or more. The same percentage spent at least fifty-five hours resolving their problems. The top 5 per cent of victims spent at least 130 hours. The estimate of total losses from identity theft in the 2006 survey amounted to $15.6 billion.
The practice normally involves at least three persons: the victim, the impostor, and a credit institution that establishes a new account for the impostor in the victim’s name. This may include a credit card, a utilities service, or even a mortgage.
Identity theft assumes a number of forms. Potentially the most harmful comprise credit card fraud (in which an account number is stolen in order to make unauthorized charges), new account fraud (where the impostor initiates an account or ‘tradeline’ in the victim’s name—the offence may be undiscovered until the victim applies for credit), identity cloning (where the impostor masquerades as the victim), and criminal identity theft (in which the impostor, masquerading as the victim, is arrested for some offence, or is fined for a violation of the law).
Part of the responsibility must be laid at the door of the financial services industry itself, whose lax security methods in granting credit and facilitating electronic payment subordinate security to convenience.
The growing use of DNA evidence in the detection of crime has generated a need for a database of samples to determine whether an individual’s profile matches that of a suspect (see Figure 3). The DNA database in England and Wales (with its 5.3 million profiles, representing 9 per cent of the population) may be the largest anywhere. It includes DNA samples and fingerprints of almost a million suspects who are never prosecuted or who are subsequently acquitted. It is hardly surprising that innocent persons should feel aggrieved by the retention of their genetic information: the potential for misuse is not a trivial matter. This dismal prospect led two such individuals to request that their profiles be expunged following their walking free. Unable to convince the English courts, they appealed to the European Court of Human Rights, which, at the end of 2008, unanimously decided that Article 8 (‘respect for private life’) had been violated.
Other jurisdictions tend to destroy a DNA profile when a suspect is acquitted. In Norway and Germany, for example, a sample may be kept permanently only with the approval of a court. In Sweden, only the profiles of convicted offenders who have served custodial sentences of more than two years may be retained. The US permits the FBI to take DNA samples on arrest, but they can be destroyed on request should no charges be laid or if the suspect is acquitted. Among the 40 or so states that have DNA databases, only California permits permanent storage of profiles of individuals charged but then cleared. It has been suggested that, to avoid discrimination against certain sectors of the population (such as black males), everybody’s DNA should be collected and held in the database. This drastic proposal is unlikely to attract general support. What is clear, however, is that to maintain the integrity of the system and protect privacy, the vulnerability of such sensitive genetic data requires stringent regulation.
The technology of radio frequency identification emerged as a means of inventory control to replace barcodes. An RFID system consists of three elements: a minuscule chip on each consumer item (an RFID tag) that stores a unique product identifier; an RFID reader; and a computer system attached to the reader having access to an inventory control database. The database contains extensive product information, including the contents, origin, and manufacturing history of the product. Assigning a tag to a product also discloses its location, the date and place of its sale, and, in the case of transport companies, its progress. It has applications in recalling faulty or dangerous merchandise, tracing stolen property, preventing counterfeiting, and providing an audit trail to thwart corruption.
The potential of RFID is huge, and it is increasingly being used for ‘contactless’ payment cards, passports, and the monitoring of luggage, library books, and pets. There is no reason why humans could not be microchipped—like our dogs. It could assist the identification of Alzheimer’s patients who go astray. Combining RFID and wireless fidelity networks (Wi-Fi) could facilitate real-time tracking of objects or people inside a wireless network, such as a hospital. The privacy concern is that the acceptance of these benign applications may initiate less benevolent uses; there are likely to be calls for sex offenders, prisoners, illegal immigrants, and other ‘undesirables’ to be tagged (see Figure 4).
There is also the fear that if RFID data were to be aggregated with other data (for example, information stored in credit or loyalty cards)—to match product data with personal information—this could allow comprehensive personal profiles of consumers to be assembled. Moreover, an increase in the use of RFID in public places, homes, and businesses could portend an enlargement of the surveillance society. For example, if my car has an RFID tag affixed to the windscreen that automatically deducts any road tolls directly from my bank account, the fact that it has just passed through the toll station at Pisa may be useful to a party interested in my movements. There is plainly a need for sophisticated privacy-enhancing technologies (PETs) here.
Satellite signals are used by global positioning systems (GPSs) to establish a location. GPS chips are now common in vehicle navigation systems and mobile phones. It is possible to augment the data generated from a GPS by their assimilation into databases and aggregation with other information to create geographic information systems (GIS). In order to make or receive calls, mobile phones communicate their location to a base station. In effect, therefore, they broadcast the user’s location every few minutes.
Services such as Loki use wireless signals to triangulate a position, which allows a user to obtain local weather reports, find nearby restaurants, cinemas, or shops, or share their location with friends. According to its website, ‘as you travel around, MyLoki can automatically let your friends know where you are using your favourite platform—Facebook, RSS Feeds, or badges for your blog or even Twitter’. It claims to protect privacy by refraining from the collection of any personal information.
Domestic drones are increasingly being deployed in the US for surveillance. A number of American states have established legislation regulating their use. Privacy advocates have urged Congress to limit the use of drones to situations where a warrant has been acquired; in cases of an emergency; or where there are reasonable grounds to believe that the evidence collected directly relates to a specific criminal act. Images, it is argued, should then be retained only when there is reasonable suspicion that they contain evidence of a crime or where they are relevant to an ongoing investigation or trial. Recently the Governor of Washington State vetoed a bill that would have regulated the use of drones by government and police, stating that it did not go far enough to protect privacy rights. Instead, he called for a fifteen-month moratorium on purchasing or using unmanned aircraft by state agencies for anything other than emergencies such as forest fires. The original proposal to regulate drone use had broad, bipartisan support in the legislature, as well as the backing of the American Civil Liberties Union (ACLU), but Governor Inslee insisted, ‘I’m very concerned about the effect of this new technology on our citizens’ right to privacy. People have a desire not to see drones parked outside their kitchen window by any public agency.’
The ability to explore our genetic structure poses a number of privacy problems, not least the extent to which a doctor’s duty to preserve patient confidentiality, enshrined in the Hippocratic Oath, adequately safeguards this sensitive information against disclosure. It raises too the intractable problem of the subject’s blood relatives—and even partners and spouses—whose interest in learning the data is far from trivial.
The challenges posed by these—and other—intrusions should not be underestimated. How have we arrived at this situation? The next chapter attempts to provide an answer.
Repelling the attacks
PETs seek to protect privacy by eliminating or reducing personal data, or by preventing the unnecessary or undesired processing of personal data by privacy-invading technologies (PITs), without compromising the operation of the data system. Originally they took the form of ‘pseudonymization tools’: software that allows individuals to withhold their true identity from operating electronic systems, and to reveal it only when absolutely essential. These technologies help to reduce the amount of data collected about an individual. Their efficacy, however, depends largely on the integrity of those who have the power to revoke or nullify the shield of the pseudonym. Unhappily, governments cannot always be trusted.
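One common way such a pseudonymization tool can work is sketched below: a keyed hash turns a real identity into a stable pseudonym, so that records can be linked to one another without naming the individual. The key, service name, and email address are invented for illustration, and the point of the final comment is exactly the caveat above: whoever holds the key can nullify the pseudonym.

```python
import hashlib
import hmac

# Hypothetical pseudonymization service: a keyed hash (HMAC) maps a real
# identity to a stable pseudonym. Only the key-holder can recreate the
# mapping, so everything turns on the integrity of the key's custodian.
SECRET_KEY = b"held-by-a-trusted-intermediary"  # assumed, kept out-of-band

def pseudonym(identity: str) -> str:
    digest = hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # a short, stable pseudonym

# The same person always maps to the same pseudonym, so their records
# can be linked without revealing who they are...
assert pseudonym("jane.doe@example.com") == pseudonym("jane.doe@example.com")

# ...but without SECRET_KEY, the pseudonym cannot feasibly be reversed
# or even tested against a guessed identity.
print(pseudonym("jane.doe@example.com"))
```

The design choice matters: an unkeyed hash of an email address could be reversed by brute-force guessing, whereas the keyed variant concentrates that power in whoever holds the secret.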
Instead of pseudonymity, stronger PETs afford the tougher armour of anonymity, which prevents governments and corporations from linking data with an identified individual. This is normally achieved by a succession of intermediary-operated services. Each intermediary knows the identities of the intermediaries next to it in the chain, but has insufficient information to facilitate the identification of the previous and succeeding intermediaries. It cannot trace the communication to the originator, or forward it to the eventual recipient.
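The structure of such a chain can be sketched abstractly. Here plain nested ‘envelopes’ stand in for the layered encryption a real remailer or onion-routing system would use, and the relay names are invented; the point is simply that each hop learns only its immediate successor.

```python
# A sketch of a chain of anonymizing intermediaries. In a real system
# each layer would be encrypted to one relay's key, so no relay could
# look deeper than its own layer.
def wrap(message, route, recipient):
    """Wrap the message so each relay can see only the next hop."""
    envelope = {"next": recipient, "payload": message}
    for relay in reversed(route):
        envelope = {"next": relay, "payload": envelope}
    return envelope

onion = wrap("hello", ["relay-A", "relay-B", "relay-C"], "recipient")

# Peel the layers as each relay would:
hop, hops_seen = onion, []
while isinstance(hop["payload"], dict):
    hops_seen.append(hop["next"])   # this relay learns only its successor
    hop = hop["payload"]

print(hops_seen)       # ['relay-A', 'relay-B', 'relay-C']
print(hop["next"])     # recipient
print(hop["payload"])  # hello
```

Only the final layer names the recipient, and only the sender ever knew the whole route, which is what prevents any single intermediary from linking originator to destination.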
These PETs include anonymous remailers, web-surfing measures, and David Chaum’s payer-anonymous electronic cash (e-cash) or ‘Digicash’. The latter employs a blinding technique that sends randomly encrypted data to my bank, which then validates them (through the use of some sort of digital money) and returns the data to my hard disk. Only a serial number is provided: the recipient does not know (and does not need to know) the source of the payment. This process affords an even more powerful safeguard of anonymity. It has considerable potential in electronic copyright management systems (ECMS), with projects such as CITED (Copyright in Transmitted Electronic Documents) and COPICAT being developed by the European Commission’s ESPRIT programme. Full texts of copyrighted works are being downloaded and marketed without the owner’s consent or royalties being paid; these projects seek technological solutions by which users could be charged for their use of such material. This ‘tracking’ of users poses an obvious privacy danger: my reading, listening, or viewing habits may be stored, and access to them obtained, for potentially sinister or harmful purposes. Blind signatures seem to be a relatively simple means by which to anonymize users.
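Chaum’s blinding idea can be illustrated with textbook RSA arithmetic. The numbers below are toy values chosen purely for exposition (real schemes use large keys and padded messages): the bank signs a blinded value without ever seeing the coin’s serial number, yet the unblinded result verifies as an ordinary signature.

```python
import math

# Signer's (the bank's) toy RSA key.
p, q = 61, 53
n = p * q                          # modulus, 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 123  # the message, e.g. a digital coin's serial number
r = 99   # user's secret blinding factor, coprime with n
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n             # user blinds m before sending
blind_sig = pow(blinded, d, n)               # bank signs without seeing m
signature = (blind_sig * pow(r, -1, n)) % n  # user strips the blinding

# The unblinded result is a valid ordinary signature on m,
# yet the bank cannot link it to the blinded value it signed:
print(pow(signature, e, n) == m)  # True
```

The algebra is the whole trick: signing the blinded value yields m^d · r (mod n), and multiplying by r⁻¹ leaves m^d, a signature on the original serial number.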
‘Do Not Track’ (DNT) is a technology and policy proposal that permits users to opt out of tracking by websites they do not visit, including advertising networks and social platforms. Microsoft, Apple, Google, and Mozilla have implemented the system in their browsers, but the implementations are neither comprehensive nor user-friendly. And the system is, at present, self-regulatory and unenforceable.
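Technically, the preference is nothing more than a one-line HTTP request header, `DNT: 1`, which the browser attaches to every request; honouring it is entirely voluntary on the server’s side. The sketch below only builds a request object (no network traffic is sent), and the URL is a placeholder.

```python
import urllib.request

# A browser signals the Do Not Track preference with a single header,
# "DNT: 1". Nothing compels the receiving site to obey it.
req = urllib.request.Request("https://example.com", headers={"DNT": "1"})

# urllib stores header names in capitalized form, hence "Dnt":
print(req.get_header("Dnt"))  # 1
```

That the entire mechanism amounts to a polite, unenforceable flag is exactly why the text above calls the system self-regulatory.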
Anonymity is an important democratic value. Even in a pre-electronic age, it facilitated participation in the political process which an individual may otherwise wish to spurn. Indeed, the US Supreme Court has held that the First Amendment protects the right to anonymous speech. There are numerous reasons why I may wish to conceal my identity behind a pseudonym or achieve anonymity in some other way. On the Internet, I may want to be openly anonymous but conduct a conversation (with either known or anonymous identities) using an anonymous remailer. I may even wish no one to know the identity of the recipient of my email. And I may not want anyone to know to which newsgroups I belong or which websites I have visited.
There are, moreover, obvious personal and political benefits of anonymity for whistle-blowers, victims of abuse, and those requiring help of various kinds. Equally (as always?), such liberties may also shield criminal activities, though the right to anonymous speech would not extend to unlawful speech. Anonymity enjoys a unique relationship with both privacy and free speech. The opportunities for anonymity afforded by the Internet are substantial; we are probably only on the brink of discovering its potential in both spheres. It raises (somewhat disquieting) questions about who we are: our very identities.
In respect of data protection (see Chapter 5), the EU’s Article 29 Working Party published an ‘Opinion’ on anonymization techniques describing various methods that data controllers use to render data anonymous. ‘Once a dataset is truly anonymized and individuals are no longer identifiable,’ the opinion points out, ‘European data-protection law no longer applies.’ The use of strong encryption to protect the security of communications has been met by resistance (notably in the US and France) and by proposals either to prohibit encryption altogether or, through means such as public key escrow, to preserve the power to intercept messages. The battle has been joined between law enforcers and cryptographers; it is likely to be protracted, especially since ‘enthusiastic’ is too mild a word to describe how ordinary computer users have embraced the possibility of strong encryption—Phil Zimmermann’s encryption software, PGP (‘Pretty Good Privacy’), for example, with which a key may be generated in less than five minutes, is freely available on the Internet.
Further, Yahoo recently announced measures that include a system that encrypts all information being transmitted from one Yahoo data centre to another. The technology is designed to render email and other digital information flowing through data centres unintelligible to outsiders.
Technological solutions are especially useful in concealing the identity of the individual. Weak forms of digital identities are already widely used in the form of bank account and social security numbers. They provide only limited protection, for it is a simple matter to match them with the person they represent. The advent of smart cards that generate changing pseudo-identities will facilitate genuine transactional anonymity. ‘Blinding’ or ‘blind signatures’ and ‘digital signatures’ will significantly enhance the protection of privacy. A digital signature is a unique ‘key’ which provides, if anything, stronger authentication than an individual’s written signature. A public key system involves two keys, one public and the other private. The advantage of such a system is that if a message can be decrypted with the sender’s public key, it could only have been encrypted with the corresponding private key—and therefore created by the sender.
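The sign-with-private, check-with-public pattern can be illustrated with toy RSA numbers (real keys are thousands of bits long, and real schemes add padding; the primes and message here are invented for exposition). The message is hashed first, and it is the hash that is signed.

```python
import hashlib

# A toy RSA digital signature (illustrative key sizes only).
p, q = 1009, 1013
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign_message(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)       # only the private key-holder can do this

def verify_signature(message: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == digest  # anyone with the public key can check

sig = sign_message(b"I agree to the contract")
print(verify_signature(b"I agree to the contract", sig))  # True
print(verify_signature(b"I agree to the contracT", sig))  # almost certainly False
```

Because hashing ties the signature to the exact message, even a one-character change makes verification fail, which is what gives the digital signature its authenticating force.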
The paramount question is: is my identity genuinely required for the act or transaction concerned? It is here that data-protection principles, discussed in Chapter 5, come into play.