9.6 Privacy


In most of this book we have dealt with protecting "the system" from harm. Our "system" included companies, organizations, governments, and universities; its applications, web pages, infrastructure, computers, and networks; and all its data. "Harm" meant all the threats and threat agents we have studied. Because of the potential for serious harm, we often focused on malicious attacks, for example, by hackers. So the nature of protection has been to safeguard the system.

Now we explore another dimension of information systems security: protecting the individual, nonmalicious user. In particular, we want to investigate the privacy of sensitive data about that user. The user should be protected both against the system's misuse of private data and against the system's failure to protect those data from outside attack and disclosure.

In this information age, private data can have value. A new class of crime, called identity theft, occurs when one person takes on the identity of another, perhaps creating massive debt or even perpetrating crimes under the victim's name. With a victim's credit card details, an attacker can run up huge charges in a short time. An attacker can commit and be convicted of crimes under another name or trade on someone else's education and work experience to get a job. Sorting out who did (or didn't do) what can be a monumental task. People expect privacy for certain aspects of their lives, such as income, taxes, criminal records, medical data, and even library reading patterns. Since much of this information is now stored electronically, privacy is an important computer security issue.

Sometimes the patterns themselves make the individual data valuable. For example, marketing agencies are eager to acquire lists of likely purchasers. What are private data worth? To most people, the answer depends largely on whether the data are their own or, by extension, whether they can foresee the same loss of privacy happening to them. Although difficult to assess, there certainly is a value to privacy.

In this section we first explore some of the conditions that can cause loss of privacy and then examine some of the controls that prevent or limit those losses.

Threats to Privacy

Many of the threats to privacy are not new. Bribing insiders, especially poorly paid ones, has worked for centuries. A break-in usually involves loss of some valuables, such as jewelry, silver, or electronics. But who can say whether the laptop computer was stolen just because it was a computer or because it contained sensitive data? And public records have been, by definition, open to the public. So loss of privacy in those records is not new. Or is it?

Aggregation and Data Mining

In 1950 you could have gone to the government records office, recorded the names of all property owners, recorded the names of all drivers, looked up military veterans, tracked birth announcements in newspapers, and bought magazine subscription lists. Plenty of data was available. A private investigator might have used sources such as these to investigate the background of a single, target individual. But it was too laborious to cross-correlate many large lists to find all veterans who owned homes, drove Chevrolets, and had children under five years old. Details on consumers, especially attributes that can distinguish potential customers, are extremely valuable to marketers.

Database management systems have made large-scale correlation possible. Not only can computers sift, sort, and correlate, but there is also much more raw data on which to operate. Often, you don't realize how much information about you can be gleaned from your electronic transactions. For example, your bank, or another bank whose ATM you use, obtains your identity. A toll booth transponder system can record the time and date at which a particular transponder passes the toll booth. (Consider the possibility for the government to mail you a ticket if your transponder passes one receiver and then passes another receiver so soon after that the only way to cover the distance between the two would be to exceed the speed limit.) Credit card transactions or cell phone records demonstrate that you are not in your home city. And peaks in your home's electrical usage suggest when you are home and when you are away. In a day, an ordinary person may cause twenty database records to be generated (ignoring records from Internet activity, which is a huge but separate issue).
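
To make the aggregation concrete, here is a minimal Python sketch of cross-correlating three separately collected record sets; the file names and column layouts are invented for illustration only.

```python
# Hypothetical illustration of cross-correlating separately collected lists.
# File names and column layouts are invented for this sketch.
import csv

def load(filename, key="name"):
    """Read a CSV file and index its rows by the key column."""
    with open(filename, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

homeowners  = load("property_records.csv")       # name, address, assessed_value
drivers     = load("vehicle_registrations.csv")  # name, make, model
subscribers = load("magazine_list.csv")          # name, title

# Anyone appearing on all three lists, and driving the right make of car,
# becomes a marketing target.
targets = [
    {**homeowners[n], **drivers[n], **subscribers[n]}
    for n in homeowners.keys() & drivers.keys() & subscribers.keys()
    if drivers[n]["make"] == "Chevrolet"
]
for t in targets:
    print(t["name"], t["address"], t["model"], t["title"])
```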

Poor System Security

People are the weak link in any security system, and insiders are involved in the majority of computer security incidents [CSI02, DTI02]. Whether through carelessness, poor understanding, pressure, or simple human error, insiders unintentionally expose private data. Personal details are discarded in unprotected trash, inadvertently displayed on web sites, or unknowingly stored in files on a computer (such as in a cookie or as part of a query embedded in a "favorite" URL). Add to that the malicious approaches in which workers are bribed, coerced, or tricked into compromising security.

The vulnerabilities we have studied so far involving loss of confidentiality or integrity can lead to loss of private data. One target of interest to hackers today is credit card numbers. In March 2001 the U.S. FBI said that an estimated 1 million credit card numbers had been stolen from over 40 banking and commerce sites. In August 2001 VISA reported to its member banks a list of 44,000 card numbers that had been exposed. Underground web sites and crime operations sell stolen numbers (see Sidebar 9-3 on credit card theft). Whether these numbers come from compromised web sites or electronic interceptions or some other means is unknown. But system flaws certainly account for some of the loss.

Sidebar 9-3 Playing the Credit Card Numbers Game

On 13 May 2002, The New York Times reported Internet sites offering credit card numbers at prices of $100 for 250 numbers, or $1000 for 5000. Prices fluctuate with supply and demand. Because of the worldwide reach of the Internet, these cards are sold to destinations all over the world, especially eastern Europe and Asia. The sites where cards are sold move frequently, frustrating law enforcement. The difficulty in prosecuting a citizen of one country under the laws of another makes the situation even more complex.

The card numbers are used to make purchases over the Internet or to obtain cash advances against the credit card. Although the consumer is typically not responsible for the losses, the issuing banks and card agencies, such as VISA and MasterCard, suffer losses approaching 0.25 percent for online transactions versus 0.10 percent for other kinds of transactions. These costs must be recovered somehow: they are passed along to the consumer in higher interest rates or to the merchant in higher transaction fees, which ultimately raise merchants' prices.

Credit card numbers can also be used for extortion. In 2000 an online music distributor was approached by attackers claiming to have extracted its lists of credit card numbers of customers. The attackers threatened to post the numbers publicly unless they were paid $100,000 ransom.

Government Threats

Big Brother is watching. Just as marketers use computers to correlate disparate data and to infer more about you, so also does the government. The taxing authorities would like to know about your spending and banking patterns in order to ensure that you are paying all the taxes you owe. The medical authorities would like to know who has recently traveled to areas where a particular disease may be prevalent and to track that person's health over time. Crime investigators would like to know everyone who passed near a crime scene at the time of commission in order to obtain clues and locate potential witnesses. Everyone (except, perhaps, for criminals) would like the government to have data necessary to protect citizens, prevent crimes, and enforce laws. But citizens want to limit the government's information gathering because of the risk of excesses; in the past, abuses have included taxing authorities subjecting political enemies to unwarranted tax audits or the police harassing innocent people for political purposes.

The government has legitimate reasons to collect personal information about its citizens. Citizens expect the government to safeguard the data's privacy and to not use the data for purposes other than those for which they were rightfully collected.

Computer Use

The biggest risk to individuals' privacy probably is the Internet. Although e-mail and web surfing are two activities in which we engage voluntarily, not everyone is conscious of the enormous volume of data that can be collected.

E-mail is best likened to a post card in the regular mail. From the time the card is placed in the post by the sender to the time it arrives in your mail box, many people have easy access to the card and its message. For example, the mail carrier who delivers the card and every postal worker who handles it in transit could read it. In the same way, the contents of an e-mail message are often open to view by anyone between the sender and receiver. As with the post card analogy, the main thing preventing massive loss of privacy is volume: there are too many e-mail messages for it to be feasible for any group (government, private, or criminal) to read all, or even a substantial proportion, of messages in transit. (See, however, Sidebar 9-4 on the U.S. Carnivore program.) But just because large-scale interception or interpretation en route is infeasible today, tomorrow could be different as the speed and storage capacity of computers continue to improve. The places at which privacy is at greatest risk today are the two endpoints: interception at or close to the sender or receiver can make it possible to save and scrutinize all e-mail for that chosen sender or receiver.

Web surfing offers enormous potential for data collection. The server can record which pages you have downloaded, how long you lingered on a page before clicking to another, whether you returned to a starting point or abandoned a line of searching, as well as data you explicitly provided, such as name, address, or other identification. The surfer can be identified by source IP address and perhaps NIC address. The surfer can also be identified by cookies stored from previous visits. Worse, when you download supposedly free software, you may unwittingly also acquire a Trojan horse that can report back to its owner anything about your computer, your data, or your computing activities. Registration with a service such as Microsoft Passport allows a personal identity to be linked to a specific machine with a unique serial number and a particular IP address.
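
The following sketch illustrates how much a server can reconstruct from routine logs alone; the log format, cookie identifier, and pages are assumptions for illustration.

```python
# Hypothetical sketch: reconstructing one visitor's browsing session from a
# web server log, keyed by a tracking cookie.  The log format is an assumption.
from collections import defaultdict
from datetime import datetime

log_lines = [
    # (timestamp, cookie id, source IP, page requested)
    ("2002-06-01 10:02:11", "cookie-8d3f", "192.0.2.7", "/catalog"),
    ("2002-06-01 10:05:42", "cookie-8d3f", "192.0.2.7", "/catalog/tulips"),
    ("2002-06-01 10:09:03", "cookie-8d3f", "192.0.2.7", "/checkout"),
]

sessions = defaultdict(list)
for ts, cookie, ip, page in log_lines:
    sessions[cookie].append((datetime.fromisoformat(ts), ip, page))

# The server can infer the visitor's path and how long each page held interest.
for cookie, visits in sessions.items():
    for (t1, _, page), (t2, _, _) in zip(visits, visits[1:]):
        print(f"{cookie} spent {(t2 - t1).seconds} seconds on {page}")
```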

Societal Goal: Greatest Good for the Greatest Number

An inherent tension exists between individual privacy and the rights of government to protect its citizens. Your individual privacy rights are superseded by a more compelling need for information if, for example, the government has demonstrable reason to believe you may commit a serious crime. The balance between individual privacy rights and government access swings slowly over time, like a pendulum.

Corporate Rights and Private Business

Companies are free to collect data a government cannot. For example, the courts have held that employees' e-mail messages are the property of the employer, who has the right to read and copy them. A store can operate a security camera that records all movements by all people in the store. And companies can use location-sensitive identity badges to track where employees are within a corporate facility. Although people have certain reasonable expectations of privacy in locations such as rest rooms and public areas, shoppers surrender some of those privacy rights to the shopkeeper in exchange for protection against shoplifting or other harm.

Privacy for Sale

How much is privacy worth? People have interesting reactions to this question. On the one hand, people argue that privacy is one of the inherent freedoms of a free society and that they should not be forced to relinquish it lightly. But offer a consumer a small discount for using a frequent-buyer card (enabling the shopkeeper to track the buying habits of each customer), and the lines extend out the shop's door. So we voluntarily give away or sell our privacy many times and in many ways.

Sidebar 9-4 Carnivore and Big Brother

Carnivore is a project of the U.S. FBI to monitor e-mail traffic. The project collects header data on e-mail messages, but not the body of those messages. The Carnivore device contained secret software that the FBI would install on the servers of ISPs. Similar to packet sniffers for Ethernet networks, Carnivore would potentially obtain data on the sender, receiver, size, and date and time of every e-mail message that passes through it. If Carnivore found traffic in which the FBI was interested and had cause, the FBI would apply to a court for a search warrant; with that warrant, the FBI could begin to collect the content of e-mail messages between named senders and receivers.

The notion and justification of Carnivore are based on legal precedent for telephone communication, in which courts have ruled that, without a search warrant, the FBI can ask a telephone company to record the phone numbers called, as well as the date, time, and length of every conversation originating from a particular phone number. To obtain the actual content of the conversations, the FBI must apply to a court for a wiretap order, showing the court the basis for believing the wiretap would provide evidence of illegal activity.

Skeptics are concerned about Carnivore. Because the software was kept secret, critics charged that there was no way to ensure that Carnivore limited its search to headers and not content. Carnivore apparently produces no audit trail, so there is no way to see what addresses the agents are tapping.

When did Carnivore appear? In 1999 EarthLink, a U.S. ISP, was served with a court order to install the EtherPeek packet sniffer on behalf of the FBI. EarthLink refused to comply because it could not ensure that EtherPeek would obtain only headers. Instead, it created and installed a sniffer of its own that it knew would provide only header data, which it agreed to provide to the FBI. In 2000 the FBI again sought to install on an EarthLink server its own software, which it then revealed was not the commercial EtherPeek package but was its own private device named Carnivore. EarthLink challenged the FBI in court and, in a sealed court record, reached an agreement with the FBI.

The FBI tried to allay fears of Carnivore by hiring an independent group of testers to verify the operation of Carnivore, but without access to its source code. The testers concluded that Carnivore appeared to provide only the header data as required. Their report also indicated that Carnivore was vulnerable to the threat of FBI agents intercepting data not covered by a court order because of its lack of audit records.

On 28 May 2002 the FBI acknowledged that its controversial system had hampered an investigation into al Qaeda. Before 11 September 2001, an FBI technical agent reviewing the data obtained by Carnivore was concerned that it had captured e-mail messages from people not suspected of terrorism. The agent was so concerned by the excess that he deleted all the e-mail, even messages that were properly part of the investigation.

Think about the annoying telephone calls from telemarketers who interrupt your dinner, trying to interest you in vacation trips or cable TV or another credit card. Could it relate to your use of your frequent-buyer card at the grocery where you buy expensive prepared entrees, higher-priced wines, or large quantities of some items? Buying diapers could put you on a target list for private schools, while buying beer and pizza might attract calls for making money in your spare time.

By accepting and using a frequent-buyer card or having answered a survey on your hobbies or buying habits, you may have sold your privacy rights, and at a small price. But after having sold those rights, you have no control over how they will be used. Even if the store says it intends to use your data only for internal inventory purposes, that intention can change tomorrow without your permission. Or if the store's ownership changes, the rules about your privacy may change, too.

Sometimes selling our privacy is not so obvious. We may, knowingly or not, relinquish our privacy rights in order to obtain something. If you want to join a swim club or purchase an annual pass to an amusement park, you may have to have your photograph taken (and perhaps stored) to generate an access badge. To open a bank or credit account with telephone access you may be asked to provide an ostensibly secret identifier, such as your mother's maiden name. Here the choice of privacy is yours: you can either keep your privacy or join. Usually it is apparent what use is intended to be made of your secret details (for example, serving only as an authenticator). But you have no real control over what is actually done with your private data; for instance, a disreputable clerk could retain your mother's maiden name and impersonate you. (For another means of collecting private data, see Sidebar 9-5.)

Controls Protecting Privacy

Two facts emerge from this exploration of privacy. First, the volume of data collected, or that could be collected, on individuals is enormous. Collecting those data is perfectly acceptable as long as they are not being used for an illegal purpose, such as prohibited discrimination or harassment. Second, the potential to correlate and mine these data files is also enormous, limited only by the capacities of computers.

And how is personal privacy secured? Sadly, in the United States, not well. Businesses recognize a need to protect the private data of their customers. But finding widely acceptable authenticators is difficult. And as we have seen in Chapter 7, the social engineering attack often succeeds against computer network administrators, so why should it not also succeed against bank tellers, customer service agents, or file clerks?

Authentication

Just as with other computer applications, authentication is necessary for establishing the identity of a remote user. But how can two people who have never previously communicated and have few shared secrets authenticate?

Sidebar 9-5 Microsoft Passport

Microsoft has introduced technology it calls Passport. Ostensibly, Passport will collect a user's credentials, making access and commerce on the Internet more user friendly. A user authenticates to Passport, which then shares that authentication with member sites the user visits. Thus, a user does not need to remember different login names and authenticators (passwords) for different sites.

A second convenience, called a Passport Wallet, enables the user to register credit cards, expiration dates, billing addresses, and so forth with Passport, so that the user simply indicates which card to charge, instead of filling out several screens to place an online order.

These two services appeal to many consumers. However, some users are concerned about their privacy. Passport is not implemented as a file stored on the user's computer, like a cookie. Instead, use of Passport involves a data transfer through Microsoft.com. This design gives Microsoft access to a user's browsing and buying habits. Many consumers are wary of providing such information to a commercial third party. Might Microsoft sell to bookseller A the fact that you have recently bought books from booksellers B and C, for a specific amount or of a specific type? Moreover, even if Microsoft promises to limit distribution of this information, it has no control over the companies to which it provides the data.

A second privacy concern involves Microsoft's less-than-perfect record on developing code free from security vulnerabilities. Users worry that a major security vulnerability could expose some or all of the private data stored in a Passport, especially in the Wallet.

Understandably, consumers are slow to embrace Passport technology. The single sign-in capability Passport offers is convenient, but is it worth the security risk?

The most commonly used authenticators are name, address, mother's maiden name, birth date, social security or other government identity number, account number (for a business), and preestablished PIN. Name and address are widely available, birth date can often be found from a search of the motor vehicles office (or its web site), mother's maiden name can be found from birth registry records (or its web site), and the social security or government identity number is widely used by employers and banks. Private investigators can obtain these supposedly secret data items quite easily.

The designers of authentication procedures are not creative. Wouldn't it be better to ask "I see from your account that you recently purchased airline tickets. For what airline?" or "What brand of gasoline station do you often use?"

Users are also to blame. Faced with too many PINs, people do the only sensible things: they use the same PIN for access to many places, or they write the PINs down.

As you have already learned, there are some very sound authentication techniques, including challenge-response systems, tokens, biometrics, and one-time passwords. But these approaches seem to be too sophisticated to be widely adopted, and so remote human-to-human and human-to-computer interaction is likely to remain highly subject to spoofing and impersonation.
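
As a minimal sketch of one such technique, the following Python fragment shows a generic HMAC-based challenge-response exchange over a shared secret; the names and parameters are illustrative, not a specific deployed protocol.

```python
# Minimal sketch of an HMAC-based challenge-response exchange over a shared
# secret; names and parameters are illustrative, not a specific protocol.
import hashlib
import hmac
import secrets

shared_secret = b"established out of band, e.g. embedded in a hardware token"

def issue_challenge():
    """Server side: generate an unpredictable, single-use challenge."""
    return secrets.token_hex(16)

def respond(challenge, secret):
    """Client side: prove knowledge of the secret without transmitting it."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, secret):
    """Server side: recompute the expected response and compare safely."""
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = respond(challenge, shared_secret)
assert verify(challenge, response, shared_secret)
```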

Anonymity

To protect their privacy, some people use anonymity. For example, whereas a credit card purchase leaves a transaction trail, a cash purchase leaves little record to follow. (Few illegal drug dealers accept credit cards, so a cash transaction protects both the dealer and the buyer.)

Anonymizers are e-mail forwarding services, often located in foreign countries, that remove identifying source information from an e-mail message before forwarding it to its destination. Onion routing, as described in Chapter 7, carries this process further with a series of anonymizers, none of which knows whether it is the first, last, or some other hop. Of course, one must trust the anonymizers not to retain records.
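
The layering idea behind onion routing can be sketched as follows. This sketch uses the third-party Python cryptography package (Fernet) with symmetric keys purely for illustration; a real onion network would use each relay's public key.

```python
# Conceptual sketch of onion routing's layered encryption, using the
# third-party "cryptography" package (Fernet) purely for illustration.
# A real onion network would use each relay's public key, not shared keys.
from cryptography.fernet import Fernet

# One key per relay on the chosen path, listed from first hop to last hop.
relay_keys = [Fernet.generate_key() for _ in range(3)]
relays = [Fernet(k) for k in relay_keys]

message = b"To: recipient@example.org -- the actual mail"

# The sender wraps the message once per relay, with the last relay's layer
# innermost, so each relay can strip exactly one layer and learn only the
# next hop.
onion = message
for relay in reversed(relays):
    onion = relay.encrypt(onion)

# Each relay peels its own layer; no single relay sees both the sender's
# identity and the final plaintext.
for relay in relays:
    onion = relay.decrypt(onion)

assert onion == message
```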

Chaum [CHA81, CHA82, CHA85] investigated protocols by which anonymous computer transactions could be completed. And several companies, including anonymizer.com, zeroknowledge.com, and siegesoft.com, offer anonymous web access. They interpose themselves in the traffic between a browser and a web site so that the web server cannot determine from what address an access originates. A user can reveal his or her identity, for example, when placing an online order, but to ordinary web sites the user is anonymous.
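
A minimal sketch of what such an anonymizing intermediary does is shown below: it re-issues a request with identifying headers removed, so the destination sees only the intermediary. The URL and header choices are illustrative assumptions.

```python
# Minimal sketch of an anonymizing forwarder: it re-issues a request with
# identifying headers removed, so the destination sees the forwarder's
# address rather than the user's.  URL and header names are illustrative.
import urllib.request

IDENTIFYING_HEADERS = {"Cookie", "Referer", "From"}

def fetch_anonymously(url, original_headers):
    """Forward a request, dropping headers that could identify the user."""
    safe_headers = {name: value for name, value in original_headers.items()
                    if name not in IDENTIFYING_HEADERS}
    request = urllib.request.Request(url, headers=safe_headers)
    with urllib.request.urlopen(request) as response:
        return response.read()

page = fetch_anonymously("https://example.com/",
                         {"Cookie": "session=abc123", "Accept": "text/html"})
print(len(page), "bytes fetched without exposing the user's cookie")
```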

Computer Voting

Voting is a process in which citizens want anonymity. Although anonymity is easy to achieve with paper ballots (ignoring the possibility of fingerprint tracing or secretly marked ballots), and fairly easy to achieve with machines (assuming usage protocols preclude associating the order in which people voted with a voting log from the machine), it is more difficult with computers. Properties essential to a fair election were enumerated by Shamos [SHA93].

  • Each voter's choices must be kept secret.

  • Each voter may vote only once and only for allowed offices.

  • The voting system must be tamperproof, and the election officials must be prevented from allowing it to be tampered with.

  • All votes must be reported accurately.

  • The voting system must be available for use throughout the election period.

  • An audit trail must be kept to detect irregularities in voting, but without disclosing how any individual voted.

These conditions are challenging in ordinary paper- and machine-based elections; they are even harder to meet in computer-based elections. Privacy of a vote is essential; in some repressive countries, voting for the wrong candidate can be fatal. But public confidence in the validity of the outcome is critical, so there is a similarly strong need to be able to validate the accuracy of the collection and reporting of votes. These two requirements are close to contradictory.
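
One way to visualize the tension is a toy hash-commitment audit trail: the published record lets a voter confirm that a ballot was recorded without revealing the choice. This is only a sketch of the idea, not a complete or secure voting protocol.

```python
# Toy sketch only: a hash commitment lets a voter check that a record of the
# ballot appears in a published audit trail, while the published value alone
# does not disclose the vote.  This is NOT a complete or secure voting
# protocol; it merely illustrates the tension between auditability and privacy.
import hashlib
import secrets

def commit(choice):
    """Return (commitment, nonce); the voter keeps the nonce as a receipt."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + choice).encode()).hexdigest()
    return digest, nonce

# Casting a ballot: only the commitment is posted on the public audit trail.
commitment, receipt_nonce = commit("Candidate A")
public_audit_trail = [commitment]

# The voter can later confirm inclusion without revealing the choice to others.
assert commitment in public_audit_trail

# Opening the commitment requires both the nonce and the choice, which are
# held privately (by the voter, or by officials under separate safeguards).
reopened = hashlib.sha256((receipt_nonce + "Candidate A").encode()).hexdigest()
assert reopened == commitment
```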

DeMillo and Merritt [DEM83] devised protocols for computerized voting. Hoffman [HOF00] studied the use of computers at polling places to implement casting of votes. Rubin [RUB00] concludes "Given the current state of insecurity of hosts and the vulnerability of the Internet to manipulation and denial-of-service attacks, there is no way that a public election of any significance involving remote electronic voting could be carried out securely." But Tony Blair, British prime minister, announced in July 2002 that in the British 2006 general election, citizens would vote in any of four ways: online (by Internet) from a work or home location, by mail, by touch-tone telephone, or at polling places using online terminals. All the counts of the elections would be done electronically. In 2002 Brazil used a computer network to automate voting in its national election (in which voting was mandatory).

Pseudonymity

Sometimes, full anonymity is not wanted. A person may want to order flower seeds but not be placed on a dozen mailing lists for gardening supplies. But the person does want to be able to place similar orders again, asking for "the same color tulips I bought last time." This situation calls for pseudonyms, unique identifiers that can be used to link records in a server's database but that cannot be used to trace back to a real identity.
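
A minimal sketch of one way to generate such pseudonyms is a keyed hash (HMAC) over the real identity, with the key held by the party issuing the pseudonyms; the customer name and key here are invented for illustration.

```python
# Illustrative sketch: deriving a stable pseudonym from a customer's identity
# with a keyed hash.  Only the party holding the key can link the pseudonym
# back to the real name; the customer name and key are invented here.
import hashlib
import hmac

PSEUDONYM_KEY = b"kept secret by the party issuing pseudonyms"

def pseudonym(real_identity):
    """The same identity always yields the same pseudonym, so repeat orders link up."""
    return hmac.new(PSEUDONYM_KEY, real_identity.encode(),
                    hashlib.sha256).hexdigest()[:16]

orders = {}
# First order, recorded under the pseudonym rather than the real name.
orders.setdefault(pseudonym("Alice Example"), []).append("red tulip seeds")
# A later order by the same customer links to the earlier one.
orders.setdefault(pseudonym("Alice Example"), []).append("same color as last time")

print(orders)
```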

The Swiss bank account was a classic example of a pseudonym. Each customer had only a number to access the account; presumably anyone with that number could perform any transaction on the account. (Obviously there were additional protections against guessing.) While they were in use (numbered accounts were discontinued in the early 1990s because they had been used to hold ill-gotten Nazi gains from World War II), Swiss bank accounts had an outstanding reputation for maintaining the anonymity of the depositor.

Some people register pseudonyms with e-mail providers so that they have anonymous drop boxes for e-mail. Others use pseudonyms in chat rooms or with online dating services.

Legal Controls

Laws are emerging to require reasonable protection of private data. Although of obvious benefit to consumers, these laws are opposed by marketers who want to do data mining with complex collections of data.

The strongest protection of individuals' privacy is undoubtedly the European Union (E.U.) Data Protection Act. For its part, the United States has two major privacy laws, Gramm-Leach-Bliley and HIPAA. We examine each one in turn.

E.U. Data Protection Act

The Data Protection Act of 1998 places requirements on entities (organizations, companies, governments) that collect and save data on individuals. When implemented in legislation in the 15 E.U. member nations, this act requires organizations or companies that maintain records on individuals to do the following:

  • Inform individuals of the data collected and the purpose for which it is being held.

  • Use the data for that purpose only.

  • Give individuals a right to see data about themselves and to correct errors.

  • Apply appropriate measures to ensure the privacy of those data.

The requirement to ensure the privacy of the data includes not sharing the data with entities in other countries that do not have data protection laws at least as strong as the E.U. act. Because the United States has few data protection laws, and none as strong as the E.U. requirements, this restriction jeopardizes the ability of multinational corporations to share even customer address lists or employee records with their U.S. branches. Although negotiations continue on this matter, and it is unlikely that a large company in Europe would be precluded from exchanging data with its U.S. counterpart, this will remain a point of contention between the United States and Europe.

Gramm-Leach-Bliley

The U.S. Gramm-Leach-Bliley Act (Public Law 106-102) of 1999 covers privacy of data for customers of financial institutions. Each institution must have a privacy policy of which it informs its customers, and customers must be given the opportunity to reject any use of the data beyond the necessary business uses for which the private data were collected. The act and its implementation regulations also require financial institutions to undergo a detailed security risk assessment. Based on the results of that assessment, the institution must adopt a comprehensive "information security program" designed to protect against unauthorized access to or use of customers' nonpublic personal information.

HIPAA

In 1996, Public Law 104-191, the Health Insurance Portability and Accountability Act (HIPAA), was passed in the United States. Although the first part of the law concerned the rights of workers to maintain health insurance coverage after their employment was terminated, the second part required protection of the privacy of individuals' medical records. HIPAA and its associated implementation standards mandate protection of "individually identifiable healthcare information," that is, medical data that can be associated with an identifiable individual. Health care providers must follow standard security practices to protect the privacy of individuals' health care data, such as the following:

  • Enforce need to know.

  • Ensure minimum necessary disclosure.

  • Designate a privacy officer.

  • Document information security practices.

  • Track disclosures of information.

  • Develop a method for patients' inspection and copying of their information.

  • Train staff at least every three years.

Perhaps most far-reaching is the requirement for health care organizations to develop "business associate contracts," which are coordinated agreements on how data shared between entities will be protected. This requirement could affect the sharing and transmittal of patient information among doctors, clinics, laboratories, hospitals, insurers, and any other organizations that handle such data.
