Privacy-Enhancing Technologies for ADL




Policy-Based Privacy and Trust Management

Policy-based management approaches have been used effectively to manage and control large distributed systems. In such a system, policies are usually expressed in terms of authorization, obligation, delegation, or refrain imperatives over subjects, objects, and actions. Such policies are written in a policy specification language, such as Ponder or XACL, introduced in the next section.

While policies expressed in Ponder or XACL can be compiled and enforced in the system, other policy languages are used simply to inform the user about the practices adopted by the system; these policies depend on other mechanisms for implementation and enforcement. An example of such a policy language is the Platform for Privacy Preferences Project (P3P) (P3P: The Platform for Privacy Preferences Project, 2001), developed by the World Wide Web Consortium (W3C). The subsection “P3P” elaborates on P3P and its use in meeting the privacy requirements.

Additionally, different system administrators might create policies at different times and at different granularities. Naturally, conflicts can occur between policies, calling for a mechanism to detect policy conflicts and resolve them. Thus, a facility for policy specification and negotiation would benefit distance-learning systems, with which the learner and distance-learning provider can identify policy conflicts and negotiate a resolution. A mechanism for policy negotiation is presented in the subsection entitled “Negotiation of Privacy Policy.”

Ponder

In a policy-based distance-learning system, the system administrator might specify some basic policies for the general operation of the system, and additional policies might be added based on the preferences of the parties. There would be sets of policies for each of the parties in the system (administrator, teacher, student) as well as for the interaction between these parties. In addition, governments and other regulatory bodies may have privacy laws or regulations (Privacy Technology Review). These may be translated into electronic policies and added to the general policies (Korba, 2002).

Ponder (Damianou et al., 2001; Dulay et al., 2001) is a declarative, object-oriented language for specifying security and management policies for distributed network management. A policy has three parts: subject, target, and action. In Ponder, the subject refers to users or principals, the target refers to a resource, and the action refers to an operation performed by the subject on the target.

Using Ponder, a system administrator can define the following access control policies:

  • Authorization policies: Define what activities a subject can perform on a target.

  • Delegation policies: Permit an authorized subject to delegate some of his or her authority to other subjects.

  • Information-filtering policies: Define filters on the results of performed actions.

  • Refrain policies: Define actions that a subject must refrain from performing, even though he or she might be permitted to do so.

In addition to policies, Ponder allows the definition of roles that group related policies, which eases management. A similar access control model and specification language is the XML Access Control Language (XACL) (Kudo & Hada, 2000), developed at the IBM Tokyo Research Laboratory. Using XACL, a system administrator can write policies that specify who has access to XML documents. Policies can be defined at fine granularity, applicable even to single elements within a document. XACL is usually combined with the Security Assertion Markup Language (SAML), which allows a business to issue authentication, authorization, or attribute assertions for consumers or other businesses.
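
To make the flavor of such access control policies concrete, the following minimal Python sketch models authorization and refrain policies as plain data with a default-deny check. It is not Ponder (or XACL) syntax, and the roles, targets, and actions are invented for the example.

# Minimal illustration of Ponder-style authorization and refrain policies,
# modeled as plain Python data (this is not Ponder syntax).
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    kind: str      # "auth+" (authorization) or "refrain"
    subject: str   # role or principal, e.g. "teacher"
    target: str    # resource, e.g. "student_records"
    action: str    # operation, e.g. "read"

POLICIES = [
    Policy("auth+", "teacher", "student_records", "read"),
    Policy("auth+", "admin", "student_records", "update"),
    # Teachers could technically export records, but must refrain from doing so.
    Policy("refrain", "teacher", "student_records", "export"),
]

def is_permitted(subject: str, target: str, action: str) -> bool:
    """Permit an action only if some authorization policy grants it and
    no refrain policy forbids it (default deny otherwise)."""
    request = (subject, target, action)
    granted = any(p.kind == "auth+" and (p.subject, p.target, p.action) == request
                  for p in POLICIES)
    refrained = any(p.kind == "refrain" and (p.subject, p.target, p.action) == request
                    for p in POLICIES)
    return granted and not refrained

print(is_permitted("teacher", "student_records", "read"))    # True
print(is_permitted("teacher", "student_records", "export"))  # False

A real policy service would additionally handle roles, delegation, obligation policies, and the conflict detection discussed above.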

Interestingly, while a policy-based approach makes it possible to specify and manage privacy aspects of system operation, there is a challenge in implementing the actual controls within or around the objects. Consider the principle of Limiting Collection. This principle may readily be expressed as an obligation policy. Unfortunately, in implementation, limiting the extent of collection of personal information is difficult, if not impossible. For instance, an organization may specify that it will collect names of students strictly for the purpose of managing record keeping during course execution. Yet it is difficult to imagine a system that would prevent collection of other information regarding the students’ behavior during course execution, or the data mining of other information sources for further information about the user for any purpose the organization chooses. Indeed, for the principles of Limiting Collection and Limiting Use especially, trust and audit approaches, rather than automated means of compliance, are the most obvious recourse.

P3P

P3P enables Web sites to express their privacy policies in a standard format that can be automatically retrieved and interpreted by software acting on behalf of, or under the control of, a user (i.e., a user agent). P3P defines a machine-readable (XML) format for describing data-collection practices, answering questions such as:

  • What information does the Web site gather and for what purpose?

  • How can the user gain access to the information related to his or her privacy?

  • How long is this information kept?

  • Is this information revealed to other companies, and if so, for what purpose?

A user typically expresses preferences (rules) over P3P policies using A P3P Preference Exchange Language (APPEL). Based on these preferences, a user agent can make automated or semi-automated decisions regarding the acceptability of machine-readable privacy policies from P3P-enabled Web sites. P3P-enabled client software or user agents can thus retrieve a Web site’s privacy policy and compare it against the user’s privacy preferences. If the user’s privacy preferences are satisfied by the privacy policy of the Web site, the user may proceed with the service; otherwise, the user might be warned that the Web site does not conform to his or her privacy preferences.
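
The following simplified Python sketch suggests how a user agent might compare a site’s declared practices against a learner’s preferences. Real P3P policies are XML documents and APPEL rules are considerably richer; the dictionary fields and values here are purely illustrative.

# Simplified sketch of a user agent comparing a site's declared data practices
# against a user's privacy preferences. Real P3P policies are XML documents and
# APPEL rules are richer than this; the field names are illustrative only.

site_policy = {
    "purposes": {"course_administration", "marketing"},
    "shared_with_third_parties": True,
}

user_preferences = {
    "allowed_purposes": {"course_administration"},
    "allow_third_party_sharing": False,
}

def policy_mismatches(policy: dict, prefs: dict) -> list:
    """Return a list of mismatches; an empty list means the policy is acceptable."""
    problems = []
    extra_purposes = policy["purposes"] - prefs["allowed_purposes"]
    if extra_purposes:
        problems.append(f"unacceptable purposes: {sorted(extra_purposes)}")
    if policy["shared_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        problems.append("site shares data with third parties")
    return problems

mismatches = policy_mismatches(site_policy, user_preferences)
if mismatches:
    print("Warn the user:", "; ".join(mismatches))  # the agent warns before proceeding
else:
    print("Preferences satisfied; proceed with the service.")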

Negotiation of Privacy Policy

In policy-based privacy and trust management, policies must reflect the wishes of the distance-learning consumer as well as the distance-learning provider. Yee and Korba (2003) described an agent-based approach for the negotiation of privacy policies between a distance-learning consumer and a distance-learning provider [Yee and Korba (2003-1) present the approach for any e-service]. They examined negotiation under certainty and uncertainty (where the offers and counteroffers are known or unknown, respectively) and proposed a scheme for resolving the uncertainty using the experience of others who have undergone similar negotiations. The choice of whom to call upon for negotiation experience is resolved through the identification of common interest and reputation.

In this work, fixed, nonautonomous user agents act on behalf of the learner, and similar provider agents act on behalf of the provider. The learner and the provider each provide negotiation input to their respective agents. These agents facilitate the negotiation process by (a) presenting and editing the respective privacy policies in a timely fashion, (b) providing access to reputations and negotiation experience, and (c) carrying out communications with the other party’s agent. The decision to employ nonautonomous agents is justified by the fact that privacy is an inexact concept that depends on many factors, including culture and education level, so it would be extremely difficult to build an autonomous agent that learners would trust to carry out privacy negotiations.

The scheme proposed for a negotiator to resolve negotiation uncertainty using the experience of others is summarized as follows:

Given: Stored negotiation experience of others (a data structure for this experience is given in the cited work), and stored reputations for the owners of the negotiation experience (a method to calculate reputation from past transactions is also given there)

Perform the following steps in order:

  1. Identify which parties are reputable by asking a reputation agent for parties with reputations that exceed a predetermined threshold. Call the resulting set A.

  2. Among the parties in A, identify those parties that have the same interest as the negotiator. Call the resulting set B.

  3. Among the parties in B, identify parties that have negotiated the same item as the negotiator is currently negotiating. Call the resulting set C.

  4. Retrieve the negotiation experience of parties in C, corresponding to the item under negotiation. The negotiator can then use this experience (negotiation alternatives and offers) to resolve his or her uncertainty.

The authors have also implemented a working prototype of privacy policy negotiation that incorporates the above scheme for negotiating under uncertainty.
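
As an illustration of steps 1 through 4 above, the short Python sketch below filters a pool of parties by reputation, common interest, and item negotiated, and then retrieves their experience. The record layout and the threshold are hypothetical and are not taken from Yee and Korba’s prototype.

# Illustrative sketch of steps 1-4: selecting whose negotiation experience to reuse.
# The record layout and the reputation threshold are hypothetical.

parties = [
    {"id": "p1", "reputation": 0.90, "interest": "privacy_of_grades", "items": {"retention_period"}},
    {"id": "p2", "reputation": 0.40, "interest": "privacy_of_grades", "items": {"retention_period"}},
    {"id": "p3", "reputation": 0.80, "interest": "marketing_opt_out", "items": {"retention_period"}},
    {"id": "p4", "reputation": 0.85, "interest": "privacy_of_grades", "items": {"third_party_sharing"}},
]

REPUTATION_THRESHOLD = 0.7
negotiator_interest = "privacy_of_grades"
item_under_negotiation = "retention_period"

# Step 1: keep only reputable parties (set A).
set_a = [p for p in parties if p["reputation"] >= REPUTATION_THRESHOLD]
# Step 2: keep parties with the same interest as the negotiator (set B).
set_b = [p for p in set_a if p["interest"] == negotiator_interest]
# Step 3: keep parties that have negotiated the same item (set C).
set_c = [p for p in set_b if item_under_negotiation in p["items"]]
# Step 4: retrieve their experience (alternatives and offers) for this item.
print([p["id"] for p in set_c])  # -> ['p1']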

Trust Mechanisms for ADL

It is easy to imagine that students and teachers, whether young or old, will thrive in a distance-learning environment that provides mutual trust, freedom, and respect for the individual. Trust will be a crucial factor in the success of distance learning. In the following, we investigate which mechanisms can be used to create trusted interactions between the learner and the provider, based on their underlying requirements.

Trust is also an important topic in information security research. It has attracted many researchers, and a series of papers has been published in recent years (Mass, 2001). The most common trust mechanisms are digital certificate-based approaches and policy-based trust management systems.

Digital Certificate-Based Mechanisms

Digital certificate-based mechanisms are based on the notion that “certificates represent a trusted party.” The key concept behind these mechanisms is the digital certificate. A certification authority issues a digital certificate to attest that a public key truly belongs to the claimed owner. Normally, a certificate consists of a public key, the certificate information, and the digital signature of the certificate authority. The certificate information contains the user’s name and other pertinent identification data; the digital signature authenticates the user as the owner of the public key.

The most common approaches in use today are based on X.509/PKIX and PGP.

  • X.509/PKIX (PKI = Public-Key Infrastructure) defines a framework for the provision of authentication services. It is a hierarchically structured PKI, organized as a tree rooted at a Root Certificate Authority (RCA). In this structure, trust is centered at the root and then transferred hierarchically to all the users in the network via certificate authorities (CAs).

  • PGP (An Open Specification for Pretty Good Privacy) provides a way to digitally sign and encrypt information “objects” without the overhead of a PKI. In PGP, anyone can decide whom to trust. Unlike X.509/PKIX certificates, which come from a professional CA, PGP implements a mechanism called a “Web of Trust,” wherein multiple key holders sign each certificate, attesting to its validity.

In an ADL system, these mechanisms are very useful for establishing one agent’s credentials when transacting with another agent on the Internet. The key risk is that one agent must place default trust in the authenticity of the other’s public key. There are still, however, many uncertainties that challenge certificate-based mechanisms (Ellison & Schneier, 2000). For example, why and how can one agent trust a PKI vendor? There are also questions about a vendor’s authentication rules before it issues a certificate to an agent. In practice, this kind of mechanism needs to be adjusted to offer different types of security and privacy protection depending on the application, for both the learner (agent) side and the service provider (agent) side.
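
The hierarchical trust idea behind X.509/PKIX can be illustrated with a toy chain check: a certificate is trusted if following its issuers leads to a trusted root CA. The sketch below omits everything that makes real verification hard (signature checks, validity periods, revocation), and the names and keys are placeholders.

# Toy model of hierarchical (X.509-style) trust: a certificate is trusted if a
# chain of issuers leads back to a trusted root CA. Real verification also checks
# digital signatures, validity periods, and revocation; none of that is shown here.

certs = {
    # subject          : (issuer, public_key)
    "Root CA"          : ("Root CA", "pk_root"),      # self-signed root
    "Learning CA"      : ("Root CA", "pk_lca"),
    "provider.example" : ("Learning CA", "pk_prov"),
}

TRUSTED_ROOTS = {"Root CA"}

def chain_to_root(subject: str, max_depth: int = 10) -> bool:
    """Follow issuer links until a trusted root is reached (or give up)."""
    for _ in range(max_depth):
        if subject in TRUSTED_ROOTS:
            return True
        issuer, _public_key = certs.get(subject, (None, None))
        if issuer is None or issuer == subject:
            return False
        subject = issuer
    return False

print(chain_to_root("provider.example"))  # True: provider -> Learning CA -> Root CA
print(chain_to_root("unknown.example"))   # False: no certificate on file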

Policy-Based Trust Management Systems

Besides certificate-based trust mechanisms, policy-based trust management systems have the goal of providing standard, general-purpose mechanisms for managing trust. Examples of trust management systems include KeyNote (Blaze, Feigenbaum, Ioannidis, & Keromytis, 1999) and REFEREE (Chu, 1997). Both were designed to be easily integrated into applications.

KeyNote provides a unified approach to specifying and interpreting security policies, credentials, and relationships. There are five key concepts or components in this system:

  • Actions: The operations with security consequences that are to be controlled by the system

  • Principals: The entities that can be authorized to perform actions

  • Policies: The specifications of actions that principals are authorized to perform

  • Credentials: The vehicles that allow principals to delegate authorization to other principals

  • Compliance Checker: A service used to determine how an action requested by principals should be handled, given a policy and a set of credentials

REFEREE (Rule-Controlled Environment for Evaluation of Rules and Everything Else) is a trust management system for making access decisions relating to Web documents, developed by Yang-Hua Chu and based on PolicyMaker (Blaze, Feigenbaum, & Lacy, 1996). It uses PICS labels (Resnick & Miller, 1996), which specify some properties of an Internet resource, as the “prototypical credential.” It introduces the idea of “programmable credentials” to examine statements made by other credentials and fetch information from the network before making decisions.

These systems have a number of advantages for specifying and controlling authorization, especially where it is advantageous to distribute (rather than centralize) trust policy. Another advantage is that an application can simply ask the compliance checker whether a requested action should be allowed. Generally, these mechanisms provide more general solutions to the trust problem than public-key certificate mechanisms, and they establish trust for resource and service provision.
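
The compliance-checker idea can be illustrated in a few lines: given a local policy and a set of delegation credentials, decide whether a principal may perform a requested action. The Python sketch below conveys only the flavor of the query; it is not KeyNote’s assertion language, and the principals and actions are invented for the example.

# Illustration of a compliance check: is the requesting principal authorized,
# either directly by local policy or via a delegation credential whose issuer
# is itself authorized? (Cyclic delegations are ignored in this toy version.)

# Local policy: the provider directly authorizes the admin for these actions.
policy = {("admin", "enroll_student"), ("admin", "view_grades")}

# Credentials: (delegator, delegatee, action) -- delegator grants action to delegatee.
credentials = [("admin", "teacher_agent", "view_grades")]

def compliance_check(principal: str, action: str) -> bool:
    if (principal, action) in policy:
        return True
    return any(delegatee == principal and act == action and compliance_check(delegator, act)
               for delegator, delegatee, act in credentials)

print(compliance_check("teacher_agent", "view_grades"))     # True  (delegated by admin)
print(compliance_check("teacher_agent", "enroll_student"))  # False (never delegated)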

Building trust must be recognized as a key factor in using and developing interactions between agents, and it is therefore important for agent-based distance-learning systems and applications. The approaches and technologies discussed above provide trust decision and enforcement mechanisms for interactive distance learning, with different foci and advantages. In practice, certificate-based and policy-based mechanisms are combined and tailored to fulfill the privacy and security requirements of the learner and the service provider. Some research has appeared in recent years in this regard; for example, Xu and Korba (2002) presented a trust model for distributed distance learning based on public-key cryptography.

Secure Distributed Logs

Secure distributed logs allow a record to be kept of transactions that have taken place between a service user and a service provider. The logs are distributed in the sense that they may be stored by different applications operating on different computers, and they may be stored within or managed by different agents within an ADL environment. Details of transactions, including the times of their occurrences, would be “logged” and the resulting record secured using cryptographic techniques, providing assurance that any modification, deletion, or insertion would be detectable. For distance learning, the use of secure distributed logs has important implications for privacy. In fact, they support the Privacy Principles of (1) Accountability, (5) Limiting Use, Disclosure, and Retention, and (10) Challenging Compliance. In the case of Principles (1) and (5), the existence of a secured record of transactions allows verification that conformance to each principle has been maintained. In the case of Principle (10), the existence of a record assists in challenging compliance by showing where compliance has wavered.
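
A minimal way to make such a log tamper-evident is a hash chain, in which each entry commits to the hash of the previous entry, so any modification, deletion, or insertion changes all subsequent hashes and is detectable on verification. The Python sketch below shows the idea; in a deployed system the chain head would also be signed or stored with a third party so the whole log cannot simply be recomputed, and the record fields here are invented.

# Minimal tamper-evident transaction log using a hash chain.

import hashlib
import json
import time

def _entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(log: list, record: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = dict(record, timestamp=time.time())
    log.append({"record": record, "prev": prev_hash, "hash": _entry_hash(prev_hash, record)})

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash or entry["hash"] != _entry_hash(prev_hash, entry["record"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append(log, {"learner": "alice", "event": "enrolled", "course": "PET 101"})
append(log, {"learner": "alice", "event": "disclosed", "data": "email", "purpose": "notification"})
print(verify(log))                     # True
log[0]["record"]["course"] = "other"   # tamper with an earlier entry
print(verify(log))                     # False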

Pseudonym Systems

Pseudonym systems were introduced by Chaum in 1985 (Chaum, 1985) as a way of allowing a user to interact with multiple organizations anonymously. The primary goal of pseudonym systems is to hide the user’s identity. Of course, a good pseudonym system can also authenticate users; control abuse by intruders, users, services, or applications; provide accountability measures for users; etc.

In pseudonym systems, each organization may know a user by a different alias, but these aliases cannot be linked to the true identity of the user, i.e., two organizations cannot easily combine their databases to build a dossier on the user. The pseudonyms are formed in such a way that the user can prove a statement, known as a private credential, to an organization. Private credentials indicate a relationship with another party. The user can obtain a credential from one organization under one of his pseudonyms and demonstrate possession of the credential to another organization, without revealing his first pseudonym to the second organization.

In the literature (Lysyanskaya et al., 1999; Chaum, 1986; Chaum, 1985; Private Credentials, 2000; Samuels et al., 2000; Chen, 1995), several models for pseudonym systems are proposed and developed. In these models, a certificate authority (CA) is needed only to enable a user to prove to an organization that his pseudonym actually corresponds to a public key of a real user. As well, there must be some stake in the secrecy of the corresponding secret key, such that the user can only share a private credential issued to that pseudonym by sharing his secret key. As long as the CA does not refuse service, a cheating CA cannot harm the system except by introducing some invalid users into the system.

In pseudonym systems, each user must first register with the CA, revealing the user’s true identity and public key, as well as demonstrating possession of the corresponding secret key, i.e., the user gets a public key identity certificate from the CA. After registration, the user contacts an organization, and together, they compute a pseudonym for the user. The user then may open accounts with many different organizations using different, unlinkable pseudonyms. However, all pseudonyms are related to each other, i.e., there exists an identity extractor that can compute a user’s public and secret keys.
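
A toy way to see why per-organization pseudonyms are unlinkable is to derive each pseudonym from the user’s master secret and the organization’s name, as in the sketch below. This is only an intuition aid: real pseudonym systems rely on blind signatures or zero-knowledge proofs, and the CA registration and credential machinery described above are omitted entirely.

# Toy illustration of unlinkable per-organization pseudonyms. Each pseudonym is
# derived from the user's master secret and the organization's name, so pseudonyms
# held at different organizations cannot be linked without knowing the secret.

import hashlib
import secrets

master_secret = secrets.token_bytes(32)  # known only to the user

def pseudonym(org_name: str) -> str:
    return hashlib.sha256(master_secret + org_name.encode("utf-8")).hexdigest()[:16]

nym_at_provider = pseudonym("distance-learning-provider")
nym_at_library = pseudonym("digital-library")

# The two organizations see unrelated-looking identifiers for the same user.
print(nym_at_provider, nym_at_library, nym_at_provider != nym_at_library)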

An organization may issue a private credential to a user known by a pseudonym. A private credential may be single use or multiple use, and may also have an expiration date. Single-use private credentials are similar to e-cash, in that they can only be used once in an anonymous transaction. Some e-cash protocols protect against double spending by violating the anonymity of double spenders, but generally, they do not protect against transfer of the e-cash. A private credential should be usable only by the user to whom it was issued.

The private credential has the following properties (Private Credentials, 2000):

  • Anonymity: Anonymity is the state of being unidentifiable within a set of subjects, the anonymity set, following the definition of Pfitzmann et al. (2000). It serves as the base case for privacy protection.

  • Control: Full anonymity may not benefit anyone, especially when at least one of the parties in a transaction has a legitimate need to verify previous contacts, the affiliation and eligibility of the other party, the authenticity of the other party’s personal data, and so on.

  • Credential Sharing Implies Secret Key Sharing: Users who hold valid credentials might improperly want to help their friends obtain whatever privileges a credential brings. They could do so only by revealing their secret keys to their friends, which would allow the friends to impersonate them in all regards.

  • Unlinkability of Pseudonyms: Within the system, pseudonyms are no more and no less related than a priori knowledge suggests. Without unlinkability, all of an individual’s past and future transactions become traceable as soon as the individual is identified in a single one of these instances.

  • Unforgeability of Credentials: A credential may not be issued to a user without the organization’s cooperation.

  • Selective Disclosure: The holder of private credentials can reveal selected attributes of the credentials without revealing any other information they contain.

  • Re-Issuance: This means that the CA can refresh a previously issued private credential without knowing the attributes it contains. The attributes can even be updated before the private credential is recertified.

  • Dossier-Resistance: A private credential can be presented to an organization in such a way that the organization is left with no mathematical evidence of the transaction. This is like waving a passport when passing customs. Alternatively, a private credential can be shown in such a way that the verifier is left with self-authenticating evidence of a message or a part of the disclosed property.

  • Pseudonym as a Public Key for Signatures and Encryption: Additionally, an optional feature of a pseudonym system is the ability to sign with one’s pseudonym, as well as to encrypt and decrypt messages.

Privacy protection requires that each individual have the power to decide how his or her personal data are collected and used, how they are modified, and to what extent the data can be linked; only in this way can individuals remain in control over their personal data. When using private credentials, organizations cannot learn more about a private credential holder than what he or she voluntarily discloses, even if they conspire and have access to unlimited computing resources. Individuals can ensure the validity, timeliness, and relevance of their data.

Private credentials are beneficial in any authentication-based environment in which there is no strict need to identify individuals at each and every occasion. Private credentials do more than protect privacy: they minimize the risk of identity fraud. More generally, private credentials are not complementary to identity certificates but encompass them as a special case. Thus, pseudonym systems can subsume systems based on identity certificates.

Pseudonym systems are very useful, especially in electronic commerce environments, including agent-supported distributed-learning environments, because accountability and anonymity are both essential properties for fair exchange in e-commerce transactions. Clearly, anonymity is intended to hide a user’s identity, whereas accountability is intended to expose the user’s identity, thereby holding the user responsible for his or her activities. Pseudonym systems effectively reconcile the two. In fact, e-commerce systems implementing an effective pseudonym framework do not require further measures to meet the legislative requirements for privacy in many countries.

Pseudonym techniques can be implemented using proxies, sets of agents, and so on. The Janus Personalized Web Anonymizer and the Identity Protector of the Privacy Incorporated Software Agent project (PISA, 2001; Borking et al., 1999) are both pseudonym techniques. These approaches could be applied to agent-supported distributed-learning environments to provide pseudonymity.

However, private credentials alone do not protect against wiretapping and traffic analysis. On networks such as the Internet, one can transmit from a computer behind a firewall and employ anonymizing services such as a MIX network or an Onion Routing network, described in the next section.

Network Privacy

The Internet is designed to allow computers to interconnect easily and to assure that network connections will be maintained even when various links may be damaged. This same versatility makes it easy to compromise data privacy in networked applications. Recently, traffic analysis has become a significant threat to personal data on the Internet. For instance, networks may be sniffed for unencrypted packets, threatening the confidentiality of data. Research and development, however, have led to techniques that provide varying levels of private communication between parties. In this section, we concisely describe some of the more commonly known network privacy technologies: anonymous communication networks.

The primary goal of an anonymous communication network is to protect user communication against traffic analysis. Simon (1996) proposed a formal model for an anonymous communication network. It is assumed that parties can communicate anonymously. In the simplest of such models, parties can send individual messages to one another anonymously. A stronger assumption is that parties receiving anonymous messages can also reply to them. An intermediate model allows one or more parties to broadcast messages efficiently and thus to reply to anonymous ones without jeopardizing that anonymity. However, Simon’s model assumes that reliable, synchronous communication is possible. While this simplifying assumption may be unrealistic, it is not actually exploited in his proposed protocol. Rather, the assumption of synchrony serves to discretize time, abstracting the issue of communications delays without preventing adversaries from taking advantage of them, because messages arriving during the same time period are queued in arbitrary order, to make it appear as though any one of them might have arrived first.

Anonymous communication has been studied fairly extensively. For example, in order to enable unobservable communication between users of the Internet, Chaum (1981) introduced MIX networks in 1981. A MIX network consists of a set of MIX nodes. A MIX node is a processor that accepts a number of messages as input, changes their appearance and timing using some cryptographic transformation, and outputs a randomly permuted list of function evaluations of the input items, without revealing the relationship between input and output elements. MIXes can be used to prevent traffic analysis in roughly the following manner:

  1. The message will be sent through a series of MIX nodes, say i_1, i_2, …, i_d. The user encrypts the message with the encryption key of MIX node i_d, encrypts the result with the key of MIX node i_(d-1), and so on with the remaining keys.

  2. Each MIX node receives a batch of these messages, decrypts them (removing one layer), randomly reorders them, and sends them on to the next MIX node in the route, as sketched below.
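
The following toy Python sketch mimics this layered (“onion”) encryption and batch reordering. The wrap and unwrap functions merely stand in for public-key encryption and decryption with each node’s key, and the three-node route is invented for the example.

# Toy sketch of MIX-style layered encryption along a route i1 -> i2 -> i3.
# wrap/unwrap stand in for real encryption/decryption with each node's key.

import random

def wrap(node: str, payload) -> dict:
    return {"for": node, "inner": payload}

def unwrap(node: str, onion: dict):
    assert onion["for"] == node, "this layer is not addressed to this node"
    return onion["inner"]

def build_onion(route, message):
    """Encrypt for the last node first, then wrap each earlier node's layer around it."""
    onion = message
    for node in reversed(route):
        onion = wrap(node, onion)
    return onion

def run_mix_route(route, onions):
    """Each node peels its layer, randomly reorders the batch, and forwards it."""
    batch = list(onions)
    for node in route:
        batch = [unwrap(node, o) for o in batch]
        random.shuffle(batch)  # hides the correspondence between inputs and outputs
    return batch

route = ["i1", "i2", "i3"]
onions = [build_onion(route, f"message {k}") for k in range(3)]
print(run_mix_route(route, onions))  # the messages emerge in a random order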

Based on Chaum’s MIX networks, Wei Dai described a theoretical architecture, called Pipenet (Dai, 2000), that would provide protection against traffic analysis through a distributed system of anonymizing packet forwarders. Pipenet consists of a cloud of packet-forwarding nodes distributed around the Internet; packets from a client are encrypted multiple times and flow through a chain of these nodes. Pipenet is an idealized architecture and has never been built; its serious disadvantage is that its behavior under packet loss or delay would be extremely poor.

Like the Pipenet architecture, the Onion Routing network (Goldschlag et al., 1996, 1999) has been proposed and implemented in various forms. It provides a more mature approach to protecting user anonymity against traffic analysis. The primary goal of Onion Routing is to provide strongly private communications in real time over a public network at reasonable cost and efficiency. In Onion Routing, instead of making socket connections directly to a responding machine, initiating applications make connections through a sequence of machines called onion routers. The Onion Routing network allows the connection between the initiator and responder to remain anonymous; these connections are called anonymous socket connections or anonymous connections. Onion Routing builds anonymous connections within a network of onion routers, which are, roughly, real-time Chaum MIXes. While Chaum’s MIXes could store messages for an indefinite amount of time while waiting to receive an adequate number of messages to mix together, a Core Onion Router is designed to pass information in real time, which limits mixing and potentially weakens the protection; large volumes of traffic improve the protection of such real-time MIXes. With Onion Routing, a user directs his or her applications to contact application proxies that form the entrance to the cloud of nodes. The application proxy sends an onion packet through a string of onion routers to create a route through the cloud, then forwards the application data along this route so that it exits on the other side and is delivered to the responder the user wishes to contact.

The Freedom network (Back et al., 2001; Boucher et al., 2000) was an anonymity network implemented on a worldwide scale and in use as a commercial privacy service from early 1999 to October 2001. It was composed of a set of nodes called Anonymous Internet Proxies (AIPs) that ran on top of the existing Internet. It not only used layers of encryption, similar to the MIX network and Onion Routing, but also allowed users to engage in a wide variety of pseudonymous activities, such as maintaining multiple pseudonyms, hiding the user’s real IP address and other identifying information, and anonymous e-mail. A key difference between the Freedom network and Onion Routing is that the last node replaces the missing IP source address, which was removed by the sender, with a special IP address called the wormhole IP address.

As a lighter-weight alternative to MIXes, Reiter and Rubin proposed the Crowds system (Reiter et al., 1998, 1999) in 1998. The goal of the Crowds system is to make browsing anonymous, so that information about either the user or what information the user retrieves is hidden from Web servers and other parties. The Crowds system can be seen as a peer-to-peer relaying network in which all participants forward messages. The approach is based on the idea of “blending into a crowd,” i.e., hiding one’s actions within the actions of many others. To execute Web transactions in this model, a user first joins a crowd of other users. The user’s initial request to a Web server is first passed to a random member of the crowd. That member can either submit the request directly to the end server or forward it to another randomly chosen member, and in the latter case the next member independently chooses to forward or submit the request. At each hop, the message is submitted to the final destination with probability p and forwarded to another member with probability 1 - p. Eventually the request is submitted to the server by some random member, preventing the end server from identifying its true initiator. Even crowd members cannot identify the initiator of the request, because the initiator is indistinguishable from a member who simply passed on a request from another. The Crowds system can prevent a Web server from learning any potentially identifying information about the user, including the user’s IP address or domain name. Crowds can also prevent Web servers from learning a variety of other information, such as the page that referred the user to the site or the user’s computing platform.
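
A few lines of Python can simulate the forwarding rule described above, using the convention in the text that the current member submits to the server with probability p and otherwise forwards to another randomly chosen member. The crowd size and the value of p are arbitrary illustrative choices.

# Small simulation of Crowds-style forwarding. At each hop the current member
# submits the request to the server with probability p and otherwise forwards it
# to another randomly chosen crowd member (possibly itself).

import random

def crowds_path(members, initiator, p=0.35):
    """Return the chain of members a request passes through before submission."""
    path = [initiator]
    current = random.choice(members)  # the initiator first passes to a random member
    path.append(current)
    while random.random() > p:        # with probability 1 - p, keep forwarding
        current = random.choice(members)
        path.append(current)
    return path                       # the last member submits to the end server

members = ["member%d" % i for i in range(10)]
print(crowds_path(members, "member0"))
# The server only sees the last member; crowd members cannot tell whether their
# predecessor was the true initiator or just another forwarder.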

Recently, Freedman and Morris (Freedman et al., 2002) proposed a peer-to-peer anonymous network called Tarzan. Compared with Onion Routing and the Freedom network, Tarzan uses the same basic idea of mixing traffic, but it achieves IP-level anonymity through generic and transparent packet forwarding, and it also achieves sender anonymity, as in the Crowds system, through a peer-to-peer architecture that removes any notion of an entry point into the anonymizing layer. In Tarzan, routes involve sequences of MIX relays chosen from a large pool of volunteer participants. All participants are equal peers; that is, they are all potential originators as well as relayers of traffic. Packets are routed through tunnels involving sequences of Tarzan peers using MIX-style layered encryption. One end of a tunnel is a Tarzan peer running a client application; the other is a server-side pseudonymous network address translator that changes the private address to a public address.

The above network-based approaches could be used to protect users’ privacy against traffic-analysis attacks and to satisfy the privacy-protection requirements for network transactions in agent-supported distributed-learning environments. One example is the anonymous communication scheme for mobile agents (Korba et al., 2002) proposed at MATA’02. It may appear that anonymity networks are a dramatic approach to take for a distance-learning environment. Yet consider the growth in the number of companies providing and taking advantage of outsourced technical training. Companies use this training to retool their workforces for new initiatives. Competitors would gain information regarding the plans of others if they could pinpoint the key people taking courses from distance-learning providers. In this situation, it is easy to see that anonymous networking would be a value-added service provided by the distance-learning service provider.

As a closing remark, threats against these anonymous communication networks have been discussed in the literature (Raymond, 2000; Song et al., 2002). Generally, it is a tough problem to achieve unconditional untraceability and anonymity for real-time services on the Internet, especially when we assume a strong attacker model.

Privacy-Enhancing Agent Architecture

Recently there has been some effort expended in the development of privacy-enhancing techniques for agent-based systems. The Privacy Incorporated Software Agent (PISA, 2001) is an EU-funded research project under the Fifth EU Framework Programme. Its goal is to research and demonstrate intelligent software agent-based techniques for protecting users’ privacy. Its architecture includes a model and some privacy-supporting measures, described as follows.

PISA Model

In order to protect the user’s privacy, a PISA agent should have the following features:

  • The privacy-enhancing techniques (PET), mechanisms, and interfaces.

  • The legal mechanisms to protect personal data according to the Data Directive (the European Community privacy laws).

In the PISA system, the structural relationship among the privacy-protection functions and mechanisms is illustrated in Figure 3.

Figure 3: Structure of PISA agent

The privacy protection function of PISA consists of several privacy-enhancing technologies, e.g., pseudonyms, identity protectors, anonymous communication networks, trust mechanisms, security environments, etc. Figure 4 depicts the model of PISA in its environment.

Figure 4: Model of PISA system

Privacy-Supporting Measures

In the PISA system, the following privacy-enhancing technologies are required for building the privacy protection into intelligent software agents:

  • Anonymity versus Pseudonymity: Anonymity provides stronger protection of identity, but pseudonymity is better suited to combining privacy protection with accountability. Pseudonym-based business models are also more attractive than anonymity-based ones.

  • Certificates: Certificates form the basis of the security solutions, including authentication, secure communication, and secure data storage.

  • Agent Practices Statement (APS): The APS describes the privacy policy of the agent.

  • Secure Communications: The secure communication includes authentication, integrity, and confidentiality of the communication messages.

  • Anonymous Communication: Anonymous communication provides privacy protection against traffic analysis, e.g., via Onion Routing.

  • Human Computer Interface: The interface addresses the use of information groups and the way the user links his or her personal data to privacy preferences.

  • Deleting and Updating Personal Data: PISA must be able to view, delete, and update its personal data upon request of the data subject.

Many technical measures may be put in place to protect user privacy. Because the user is ultimately in control of these features, measures are also required to assure the user that the security and privacy mechanisms are operating according to expectations.


