AN EXAMPLE: THE PROBLEM OF PRIVACY AND EMPLOYEE SURVEILLANCE


In order to give a convincing account of the reflective responsibility approach to privacy, it is first of all necessary to describe the problem and explain why it is a matter for normative consideration. Privacy is similar to other concepts discussed so far in that it seems quite clear when one thinks about it on a superficial level, but it becomes much hazier when one tries to define it and understand why it seems worthy of protection (cf. Weckert & Adeney, 1997, p. 76). Historically, privacy is a matter of public interest that can be traced back to the ancient Greeks (cf. Rotenberg, 1998, p. 152). Even though the concept has been part of public discourse ever since and has played a role in the U.S., for example, since the founding of the state, its modern formulation only arose toward the end of the 19th century (cf. Sipior & Ward, 1995, p. 50). The first legal definition was given in a seminal article by Warren and Brandeis in the Harvard Law Review of 1890. In the article, the authors try to deduce a legal basis for privacy protection, and they argue that it is part of a broader right to be let alone. For them it is comparable with, or an extension of, such recognised rights as those not to be killed, assaulted, imprisoned, or harmed (Warren & Brandeis, 1890, p. 205). This definition of privacy as a right to be let alone, sometimes changed to left alone, is still prevalent today (cf. Britz, 1999, p. 16; Velasquez, 1998, p. 449). While this definition has the advantage of being easily remembered, its disadvantage is that it is too broad and says little about the concrete content of the term. As Gavison (1995, p. 334) points out, the status of privacy is generally unclear: whether it is a situation, a right, a claim, a form of control, or a value. Also, it is not obvious whether it refers to information, to autonomy, to personal identity, or to physical access. More specific definitions therefore suggest that privacy exists when an individual can control social interaction, make autonomous decisions, and control the release and circulation of personal information (Culnan, 1993, p. 344). Another useful distinction is that between psychological and physical privacy: one referring to the inner life such as thoughts, plans, beliefs, and feelings, and the other to the exterior physical world (Velasquez, 1998, p. 450).

What is clear, however, is that the attempts to protect privacy result from a tendency not to respect it. There are many different incentives for infringing on other people's privacy; traditionally these seem to have been curiosity or even voyeurism (cf. Gumpert & Drucker, 2000, p. 180). While these motives tended to be either of a private or a political nature, the character of the threats to privacy is changing. Increasingly, the incentives to breach other people's privacy are set by economic imperatives. Another reason for the increasing importance of the topic is technology, especially information technology, which facilitates breaches of privacy on a new scale. The combination of these reasons renders the issue relevant to managers, especially managers of information technology in commercial companies, and warrants the lengthy discussion here.

While IT poses some obvious threats to privacy that will be discussed shortly, it is interesting to note that technology has always played a role in the development of the modern concept of privacy. In fact, the possibility of having pictures taken against one's will was one of the reasons for Warren and Brandeis' development of the legal notion of privacy (Warren & Brandeis, 1890, p. 211). Information technology is generally recognised as having a great potential for breaching privacy, and therefore the concern with privacy has been part of computer ethics since the inception of that discipline. There are many different ways in which IT can be used to produce information that the persons to whom it pertains might want to keep secret. One reason is the huge amount of data that is stored about every one of us in databases, and the ease with which it can be extracted and compiled (Mason, 1986, p. 7). At the same time, more and more communication is transmitted by IT and can be automatically recorded and evaluated.

An image that captures this development quite well is that computers grease data (cf. Moor, 2000). The image of greased data can help visualise how data and information, which in principle have always been around, all of a sudden acquire a new meaning. Like a machine that runs more smoothly when greased, greased information readily finds new applications. And like grease in a machine, the information is hard to contain, hard to hold on to. Grease gets everywhere, especially if the machine runs hot, and if it gets somewhere it is not supposed to go, it is hard to get rid of. These technical factors are one of the reasons why, more than a decade ago, many people already considered the invasion of privacy their greatest fear about the misuse of computer technology (Straub & Collins, 1990, p. 150).

The new technological opportunities combine with business interests to form a new dimension in the threats to privacy. Again, this development is not really new and was foreseen by Warren and Brandeis (1890, p. 195), who saw recent inventions and business methods as the reason for their development of a right to privacy. However, the technological development not only facilitates new ways of collecting data, but also provides new ways of translating the data into business value. Data mining, direct marketing, automated customer checks, e-commerce, and others are examples of the business interest in data. The manifest business interest in computer-generated information and the resulting incentive to breach privacy have led to a change of paradigm with regard to the threats to privacy. While the state and government used to be viewed as the classical threat to privacy, this perception has changed, and one can increasingly find statements such as: "In developed countries, at least in peacetime, business is a greater menace to privacy than the government tends to be" (Himanen, 2001, p. 99; similarly Tavani, 2000, p. 74). This shift in the perception of threat is interesting from a social perspective, because in some ways it mirrors the political discussion between liberal individualists and communitarian collectivists about personal freedom versus social needs (cf. van den Hoven, 1999, p. 140). Furthermore, it raises interesting questions about the place of business in a society and thus about the moral foundation of business (Johnson, 2001, p. 125). In order to understand how these ideas are to be judged, and also to see the basis of managerial decisions with regard to privacy, it is helpful to take a look at the discourse concerning the justification of privacy. This will also lay the groundwork for the following discussion of responsibility regarding privacy.

As a general starting point, one can state that there are different paradigms used for the defence of privacy protection. These can be divided into absolute and relative positions. Proponents of the absolute approach view privacy as a non-negotiable right or obligation, similar or equal to human rights such as the right to life (cf. Spinello, 1997, p. 5). The relative viewpoint sees privacy as one among many goods that is worth protecting but that has to be seen in the perspective of the other rights that also require protection. A similar differentiation is sometimes made between privacy as an intrinsic or an instrumental right or value (cf. Tavani, 2000, p. 70; Moor, 2000, p. 203). Intrinsic values need no further justification; they are justified in themselves, whereas instrumental values are values only with regard to some other value that they protect or promote. Whichever view is taken, it is generally recognised that a right to privacy has limits. One can easily see that a society with an absolute right to privacy, understood as the right to control the information concerning oneself, would not function. Civic duties such as taxes, military service, and administration in general could not work in this sort of environment. In order to determine where exactly the limits of a right to privacy lie, it is imperative that one understands its foundation. Here one can distinguish between an ethical defence of privacy and a legal one. Some authors see privacy as a moral hypernorm in the sense of Donaldson and Dunfee's (1999) integrated social contract theory (Milberg, Burke, Smith, & Kallman, 1995, p. 73), whereas others limit their arguments to legal and constitutional arguments (Shattuck, 1995, p. 306).

Among the answers to the question of why privacy is important, one can again distinguish between two great groups: one sees privacy as important because of its effects on the individual, the other emphasises the social utility of the concept. The first group of answers tends to posit privacy as a precondition for personal development. It is viewed as necessary to become an independent individual. "Privacy, or personal freedom, is the basis for self-determination, which is the basis for self-identity as we understand it in American society" (Severson, 1997, p. 65). The protection of privacy can be seen as important for the development of abilities that enable the individual to function correctly within society, to develop its potential, to develop self-consciousness, etc. (cf. Rachels, 1995; Introna, 2000). At the same time, a Kantian argument can be used that sees privacy protection as an expression of the recognition of a person's autonomy (Spinello, 1997, p. 5). This group of arguments, which emphasises the importance of privacy for the individual, gradually leads on to the utilitarian arguments that emphasise its importance for social groups (Elgesiem, 1996, p. 54). Privacy, by allowing the individual to develop to its full potential, also allows it to develop those character traits that are necessary for interaction. One important factor that is frequently named is trust. People apparently need some private space that enables them to build a trusting relationship with others (cf. Koehn, 2001; Johnson, 2001, p. 120). A similar idea can be found in the argument that a sense of privacy is needed to feel secure, which in turn is a precondition for the development of a stable self (Brown, 2000, p. 63).

If these arguments are correct, they lead to the conclusion that privacy protection is necessary for a functioning society because it is a condition for the development of individuals who can collaborate to build such a society. Especially those forms of social organisation that are based on strong participation of the individuals will therefore be keen on the protection of privacy. Therefore one can frequently find arguments that privacy is of essential importance to democracy (Johnson, 2001; Gavison, 1995).

So far it seems as if privacy were a universally recognised right or value. The question could therefore be: Why worry about it? If everybody agrees that it is important, where is the problem? One of the problems is that, despite the principal agreement on privacy protection, there is a multitude of attempts to realise it that are not necessarily coherent. On the one hand, there is the international confusion about privacy. Some countries enact a strong approach by putting the fundamental right to a person's data in the control of that person. Germany, for example, recognises a constitutional right to informational self-determination (Hoffmann-Riem, 2001). The U.S. approach, on the other hand, is to emphasise the protection of the individual from the state, but beyond that it only recognises a right to privacy where there is an expectation of privacy (Tavani, 2000, p. 86). In practice this means that the individual's protection from breaches of privacy is much weaker in the U.S. This is not only a matter of different national interpretation; it even threatens transatlantic trade relations, because the European Union requires equal protection of privacy from its trading partners under the threat of an end to data exchange (Langford, 1999a, p. 124; Culnan, 1993, p. 343). And these are only the differences between democratic states that stress the importance of the individual as their common basis. The problems become even worse in those areas where the individual is traditionally seen as less important vis-à-vis the community.

Another problem is posed by the different definitions of the limits of privacy. We have already seen that privacy is generally recognised as a limited right. Unlimited protection of privacy would also protect the dark forces in society (Levy, 1995, p. 652). Those limits cannot be clearly defined a priori. One will have to agree with Introna (2000, p. 190) that the appropriate protection of privacy in a concrete situation is a matter of judgment. This leads us back from the general description of privacy to our question of how the use of information technology can be managed in a responsible manner. As we have seen, the reflective use of responsibility also requires a measure of judgment and prudence. Before we come to the question of responsibility with regard to privacy and IT, however, it will be necessary to narrow the problem down a little more in order to be able to discuss it somewhat coherently.

Within the area of problems concerning privacy caused or aggravated by technological advances and business interests, one can distinguish between consumer privacy and employee privacy (Rogerson, 1998, p. 22). For our discussion of the application of reflective responsibility to privacy, the problem of employee surveillance seems more interesting because some of its moral features are clearer. We will therefore leave aside for the moment the question of how management should deal with customer data and concentrate on the problem of data concerning employees.

Surveillance can be defined as the possibility of being observed by other members of an organisation (Beu & Buckley, 2001, p. 65). While surveillance in the morally relevant sense of the term is included in this definition, it usually stands for a very specific way of observing and being observed in the organisation: the observation of employees by employers or superiors with the purpose of checking the employees' behaviour during work and sometimes even outside of work. This practice, while again not new, has become more widespread due to the use of computers and IT. An employee's behaviour online is easily followed by looking at log files and other data that are routinely produced. Additionally, there is by now a multitude of software with the express purpose of recording and directing employees' behaviour. Special software can be used to determine exactly how many keys an employee strikes or how much time she spends on the Internet and at what sites. Other software can be used to restrict Internet access, to automatically check email messages, or to filter out certain types of attachments. Another technological means of surveillance that is increasingly used in organisations is video cameras that monitor the physical location and activities of employees.
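To make concrete how routinely produced data lends itself to this kind of monitoring, the following minimal sketch shows how a few lines of code could summarise per-employee web use from an ordinary proxy log. It is an illustration only, not a description of any particular monitoring product; the log format, file name, and the list of "work-related" domains are hypothetical assumptions.

```python
import csv
from collections import defaultdict

# Hypothetical CSV proxy log with a header row: timestamp,user,domain,seconds_spent
# e.g. 2004-03-01T09:15:02,jsmith,news.example.com,45

WORK_DOMAINS = {"intranet.example.com", "erp.example.com"}  # assumed whitelist

def summarise(log_path):
    """Aggregate non-work browsing time per employee from a proxy log."""
    totals = defaultdict(float)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] not in WORK_DOMAINS:
                totals[row["user"]] += float(row["seconds_spent"])
    return dict(totals)

if __name__ == "__main__":
    for user, seconds in sorted(summarise("proxy.log").items()):
        print(f"{user}: {seconds / 60:.1f} minutes of non-work browsing")
```

The point is not the code itself but how little effort such aggregation requires once the data exists, which is precisely what the greased-data image describes.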

While it is hard to determine exactly how many employees are subject to surveillance, which is partly due to the lack of a clear definition of the term, it is obvious that the number is substantial. As early as 1988, the U.S. Office of Technology Assessment estimated that ten million American workers were subject to concealed video and computer monitoring. From 1985 to 1988 the number of surveillance systems sold to business firms tripled to 70,000 (Bowie, 1999, p. 85). A survey conducted in 1996 by the Society for Human Resource Management found that 36 percent of responding companies searched employee messages regularly and 70 percent said that employers should reserve the right to do so (Schulman, 2000, p. 155). Other sources state that more than 30 million workers were subject to workplace monitoring in 2000, up from 8 million in 1991 (Hartman, 2001, p. 12). These partly contradictory figures show that it is difficult to get reliable information about employee surveillance, but at the same time they also show that it is a widespread phenomenon that is still growing. Given that there seems to be general agreement that privacy is worthy of protection, and that at first sight employee surveillance constitutes a breach of privacy, one can ask why it is done at all.

There are several arguments defending employee surveillance. The most frequently named one is that employees' use of ICT for non-business purposes produces huge losses for corporations and that surveillance of employees is therefore something like self-defence. There are estimates saying that U.S. corporations alone lose more than $54 billion a year because of non-work-related employee use of the Internet. Apart from the waste of paid-for employee time, this misuse also consumes other scarce resources such as bandwidth and undermines productivity (Boncella, 2001, p. 12). Another strong argument, used predominantly in the American debate, is the legal importance of surveillance. Given the strong litigation culture in the U.S., many companies feel obliged to check on their employees' behaviour in order to avoid lawsuits on the grounds of harassment (Koehn, 2001), negligent hiring, negligent retention, or negligent supervision (Brown, 2000). Employee surveillance can even be framed in our terms of responsibility by describing it as a measure that enforces accountability. The underlying thought is that people who have nothing to hide have nothing to fear from surveillance. In fact, few would doubt that employers have some right to know what their employees are doing and that employees have a duty to disclose the truth about themselves (cf. Posner, 1995, p. 361).
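To see how an aggregate figure of this order of magnitude can come about, a back-of-envelope calculation helps. The parameters below are purely illustrative assumptions and not the methodology behind the cited estimate.

```python
# Illustrative only: every parameter here is a hypothetical assumption,
# not the basis of the $54 billion figure cited above.
workers_online = 50_000_000      # assumed number of employees with Internet access at work
hours_wasted_per_week = 1.0      # assumed non-work browsing per employee per week
loaded_hourly_cost = 25.0        # assumed average loaded labour cost in dollars
weeks_per_year = 48

annual_loss = workers_online * hours_wasted_per_week * loaded_hourly_cost * weeks_per_year
print(f"Estimated annual loss: ${annual_loss / 1e9:.0f} billion")  # roughly $60 billion
```

Estimates of this kind are obviously sensitive to every one of these assumptions, which is one reason why published figures diverge so widely.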

Some of the arguments against employee surveillance are based on the fundamental reasons for the protection of privacy discussed earlier. If it is true that privacy is a necessary precondition for humans to develop those characteristics that enable them to come to their full potential and to interact in society, then employee surveillance might endanger this development. Given the transparency of the worker's life to employer inquiries, one can legitimately ask whether the level of employer inquiry now impinges on the inner self of workers (Brown, 2000, p. 62). Another view that also relies on human nature stresses the power aspect of surveillance. Surveillance in this sense can be interpreted as a means to project power, specifically as a way to stabilise hierarchical relationships between a powerful centre and a weak periphery (Rule et al., 1995, p. 322). Also, constant surveillance can be seen as a realisation of Bentham's Panopticon, which Foucault (1975) used as a model for the description of power relations in society. Computerisation is the perfect tool to spread surveillance in the Panopticon and thus to be used as an instrument of power (Yoon, 1996). On a less abstract plane, one can also argue that surveillance is a sign of bad labour relations. Surveillance of employees can be seen to undermine trust between managers and employees, and it also indicates great scepticism about the ability of people to behave morally (Bowie, 1999, p. 85; cf. Weisband & Reinig, 1995, p. 44). Furthermore, it can be argued that surveillance undermines the aim of instituting accountability. Accountability requires autonomy and trust. If a sense of privacy is necessary to develop these properties, then accountability is based on some measure of privacy (Introna, 2000, p. 195).

Why Assume Responsibility Concerning Privacy?

The description of privacy, its justifications and limits, was supposed to serve as a background for an exemplary analysis of the role that reflective responsibility can play in IS. What should have become clear is that privacy is a complex social problem that touches on moral and ethical questions; that affects people, groups, and organisations; that has a legal as well as a moral side; and that clearly does not lend itself to simple solutions. In this sense privacy can serve as an example for many other problems resulting from the use of ICT in business. It is therefore going to be used as an example to demonstrate the application of the theory of reflective responsibility developed so far. Before we come to the advantages and the application of reflective responsibility, however, it makes sense to briefly look at the alternatives. The question now is: How else can one react to the challenges of IS; what else could or should be done apart from trying to act responsibly in a reflective way? This comprises the question of why one should act responsibly at all. As was mentioned earlier, there is no final answer to this question. If people are not interested in acting morally, it is probably a doomed enterprise to try to force them to do so by using normative theories. However, there are reasons why, even from a non-moral point of view, it seems advantageous to act responsibly. We will now try to show that responding responsibly to privacy considerations will lead to positive results from a moral point of view, as well as from a self-interested, economically rational point of view.

This demonstration is probably most easily realised by looking at the alternatives to responsibility. The first one would be not to react at all. In order to see what effects this would have, we must become more specific than the discussion of privacy has been so far. The refusal to be responsible requires a subject, just like responsibility itself. For the sake of the argument, we will now pick a subject that can play a role in privacy matters: the individual manager, say the CIO of a business enterprise. The CIO as possible subject could simply decide to ignore the problem. This would be a possible course of action where responsibility does not seem to play a role. Furthermore, it is a course of action that is practically possible and that one can even find quite frequently in practice. The problem, however, is that even this deliberate ignoring of responsibility matters does not really help the subject evade responsibility ascriptions. The CIO who fails to take privacy matters into consideration will have to deal with ascriptions of responsibility if, due to her inaction, employees lose motivation or leave the company, if the corporate culture suffers, or if other results of her inaction are seen as objects of responsibility ascription. If privacy is indeed as serious a concern as was suggested in the discussion of the term, then it seems that responsibility will be ascribed whether the potential subjects realise this or not. In a sense this leads us back to the problem of doing and omitting. Failing to assume or accept responsibility is an action in itself that is again an object of responsibility ascriptions. Responsibility simply seems inevitable. Therefore the option of refusing responsibility is not really an option. Furthermore, such a failure to accept responsibility would run counter to the image of managers, who usually tend to take pride in being called responsible. Similar arguments could be made for other possible subjects of responsibility ascriptions in privacy protection, ranging from the government that sets legal standards to the individual technician who installs a CCTV camera. Refusal of responsibility will generally not be possible. There is no opt-out option. The example of legal responsibility shows us that ascriptions can be successful regardless of the subjects' willingness to accept them. The conclusion is that responsibility will play a role in normative questions such as that of privacy protection.

Problems of Traditional Responsibility Concerning Privacy

However, the area of privacy protection and surveillance at the same time shows us that even when responsibility is accepted and ascribed, the process of ascription runs into other problems if it follows a classical model of responsibility. If we say that privacy protection is the object of responsibility, then in the classical model we would have to determine all of the dimensions, conditions, and determinants in order to build a case for the ascription. Unfortunately, this seems impossible to do. It starts with the question of who or what the subject should be. In the brief example above, we simply posited an individual manager, the CIO, as responsible. However, it is plain to see that there are others who might be considered the subject with the same amount of justification. One could see the CEO, as the person representing the entire organisation, as the subject, or the line manager who carries out the surveillance, or the technician who installs it. Another argument might be that it is not in fact a single individual who is responsible, but the corporation as such because of its corporate culture, or a part of the organisation, or maybe the industry in which surveillance is an accepted standard. Finally, one could see the state, government or legislature, society, or international entities as the subject because they set the rules or fail to regulate surveillance.

The discussion of the traditional idea of responsibility runs into similar problems caused by a lack of clarity with regard to the other dimensions as well. Who or what is going to be the authority that decides about the acceptability of an ascription and about the sanctions? Who enforces the decision, and on what normative grounds is it to be taken? What type of responsibility are we talking about, what is the temporal horizon, and is it ascribed reflexively or transitively? And what exactly is the object? Even in the narrowed-down area of privacy and surveillance, there is a multitude of potential objects that one might be responsible for.

Of course, not all of these questions have to be addressed by everybody. In most situations the answers to many of them are predetermined by circumstances. Unfortunately, in most cases this will still not be sufficient to determine a full and viable set of dimensions for a responsibility ascription. Let us go back to the example started earlier, the responsibility of the CIO for privacy protection in a specific organisation. Let us further assume that the decision in question has been boiled down to whether or not to install Internet tracking and email checking software that would allow managers to check the use of Internet resources by the employees in their department. That means that the subject, the CIO, is clear, as is the object. One could hope that this would allow us to describe the responsibility settings of the case. However, it turns out that there is still a multitude of potential sets of responsibility that could play a role here.

First of all there is the object. While at first sight it is the decision whether or not to install the software, there is in fact a large number of objects hiding behind this one. The question is: What is the aim of the decision? Is it to streamline workflow, to save company resources, to exert control over employees? And what is the ultimate rationale behind these objects? Is it to improve the corporate culture, to help employees live up to their full potential, or is the final aim just to make profits? But even if profit is what management, represented by the CIO, is aiming at, how can it best be achieved? Corresponding to the many sub-objects, there are many different instances. While the CEO or the shareholders might judge the profitability of the decision, the employees would judge some of the social results, a judge might judge the conformity to legal standards, and the CIO's own conscience might judge the adherence of the decision to her own moral standards. Accordingly, we have a mix of moral, legal, role, and other responsibilities that can point to the future or the past and can aim at the ascription of blame or guilt, or at promotion or a bonus. At the same time, these different sorts of responsibility may demand contradictory behaviour from the CIO and confront her with a multitude of dilemmas.
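One way to make this multiplicity tangible is to write the dimensions out as a simple data structure and enumerate some of the ascriptions just mentioned. The sketch below is purely illustrative; the field names are my own shorthand for the dimensions discussed here, not a formal model from this chapter.

```python
from dataclasses import dataclass

@dataclass
class Ascription:
    """Illustrative shorthand for one responsibility ascription (hypothetical field names)."""
    subject: str   # who is held responsible
    obj: str       # what they are held responsible for
    instance: str  # who judges and sanctions
    kind: str      # moral, legal, role, ...
    horizon: str   # past (blame, guilt) or future (task, reward)

# The single decision "install monitoring software?" already supports several ascriptions:
ascriptions = [
    Ascription("CIO", "profitability of the decision", "CEO / shareholders", "role", "future"),
    Ascription("CIO", "social effects on employees", "employees", "moral", "future"),
    Ascription("CIO", "conformity to legal standards", "judge", "legal", "past"),
    Ascription("CIO", "adherence to her own moral standards", "her conscience", "moral", "past"),
]

for a in ascriptions:
    print(f"{a.subject} -> {a.obj} (judged by {a.instance}; {a.kind}; {a.horizon})")
```

Even this small enumeration shows how one decision generates several ascriptions that need not point in the same direction, which is exactly the source of the dilemmas described above.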

What is worse, even if we accept this confusion as the given reality of responsibility, we still do not know whether these different ascriptions are possible or acceptable at all. This is where we need to come back to the conditions of responsibility. The question then is whether the CIO fulfils the conditions that traditional theories of responsibility prescribe for their subjects. While one can argue that the CIO makes the decision and therefore is part of the causal chain that leads to the different results, the opposite view might hold that the decision would have come about without the CIO as well and that she is therefore not causally responsible. One could argue that it is part of the corporate culture, of the industry standards, that the board clearly voiced its preferences and that therefore the CIO had no choice. This then leads to the problem of freedom and power, where one could argue that the CIO herself was nothing but an instrument. But even if she had the power to freely decide, we still run into the problem of limited knowledge. It is impossible for the CIO to foresee all of the results of her decision, especially if there is no temporal limit to those results that are considered objects. This makes it impossible for her even to concentrate on profit maximisation as her exclusive object of responsibility, because maximising profits next month or in ten years may require different sorts of action. Also, this traditional approach to responsibility predetermines that a discussion of the kind that was just demonstrated is artificially limited to certain types of responsibility, for example to individual subjects.

This brief review of the problems of traditional responsibility with a view to managerial responsibility for employee privacy protection was supposed to serve as the background of the following discussion of the contribution of reflective responsibility. For the rest of the chapter, we will discuss some of the relevant aspects of reflective responsibility and IS with the aim of demonstrating why the reflective turn may render the concept useful where the traditional approach fails.

In this situation a multitude of questions may arise, and we will use the theory of reflective responsibility to identify the most important ones and to answer them as far as possible. In order to give structure to the problems, we will go back to the dimensions of responsibility and analyse how these are affected by the reflective use of responsibility in IS. After that we will take a look at different aspects of responsibility and IS, namely at responsibility because of IT, responsibility for IT, and responsibility through IT. Finally, we will leave the immediate business surroundings and ask what problems of the framework of responsibility might be relevant for managers, which will lead to a brief discussion of fundamental philosophical problems arising from responsibility questions in IT. At the end of the chapter, we will summarise the content in a sort of checklist in order to give an overview and to allow people faced with immediate responsibility problems to identify them more easily.



