SOCIETY, RISK, AND RESPONSIBILITY


In this section we will discuss some of the buzzwords of contemporary social science such as risk, uncertainty, complexity, globalisation, and information society. All of them are facets of the social changes that have made our societies what they are today. They are also the reasons for the failure of traditional morality and the ascent of responsibility.

Technology, as we have seen, has several meanings. It is linked to science, it has close relations to economic activity, and it also constitutes a fundamental culture that can be called a technical civilisation (Höffe, 1995, p. 119). This sort of civilisation, in which most citizens of Western societies live, has several properties that are relevant to our topic.

Technology and Risk

Our lives today seem to be riskier than ever before. We risk them when we use a car, a bicycle, or a plane. There are numerous risks to our health, our well-being, and the way we organise our lives. Some of these risks are known, such as pollution, the greenhouse effect, nuclear radiation, and so forth. Others may not even be known, and we are constantly surprised by new risks that we may have been taking for a considerable time without even knowing it, such as the consumption of BSE-infected beef. This view of rising risks stands in marked contrast to the fact that we have a higher life expectancy than any group of humans before us and that the peoples of the industrialised countries, at least, have never known a longer period of peace and prosperity than today. In order to clear up this apparent contradiction, we will have to take a look at the concept of risk and how it affects our lives.

There are several etymological sources of the term risk. On the one hand risk can be derived from the Greek riza, or root, whose basis is the Arabic risc, meaning divine gift or fate. This stands for an objective mix of chances and dangers in which we always act. On the other hand, there is the Latin/Italian root risco, meaning to sail around a cliff. Risk is then produced by humans through the attempt to avoid dangers (Hubig, 1995, p. 102). Another possible root of the term is the Persian rozik, which means daily salary, daily bread, or fate (Pietschmann, 1992, p. 192). What is important for us here is that risk denotes a non-deterministic causal relationship. The term risk is used to indicate that a consequence follows an event in a probabilistic fashion, as opposed to one of necessity (Thompson, 1985, p. 302). The philosophical use of the word risk is relatively new, but the underlying problem of lacking certainty is very old and can be seen as a basic constituent of being human. In the history of philosophy, we find the problem discussed under other headings, notably under the word contingency. Something is contingent, simply put, if it could also be different. It could be different because it is not necessary (Makropoulos, 1997, p. 13).

In a wider sense contingency can be used to describe the world at large. Contingency is the concept that the world exists but does not have to be there. The notion of contingency denotes the ambivalent areas of indetermination in which actions as well as coincidences are realised (Moran, 2000, p. 356).

Risk is the notion that tries to transform contingency in such a way that it becomes manageable. The most common (technical) definition of risk is the product of the magnitude of an action's results and their probability (Pietschmann, 1992, p. 192; Gethmann, 1996, p. 42). This may sound somewhat abstract, but in some technical circumstances both factors can be calculated. That this definition of risk is, firstly, linked to the use of technology and, secondly, of high ethical relevance can be demonstrated by an example. During the development of the nuclear bomb in 1942, the father of the hydrogen bomb, Edward Teller, believed that there was a possibility that the first nuclear explosion might ignite the water contained in the earth's atmosphere and thereby lead to the end of the world. According to one of the participating physicists, the probability of this was estimated at 3:1,000,000 and thus considered sufficiently low to take the risk (cf. Birnbacher, 1988, p. 13). This decision was based on the objective character of risk mentioned above. Nevertheless, it is certainly open to debate whether there can be any justification for risking the existence of the world.
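To make the technical definition more concrete, the following minimal sketch (in Python, with invented numbers that are not taken from any of the sources cited above) computes risk as the product of probability and damage for two hypothetical options, and shows why the formula breaks down when the potential damage is unbounded, as in the Teller example.

```python
# Illustrative sketch of the technical notion of risk as probability x damage.
# All figures are invented for demonstration purposes only.

def risk(probability: float, damage: float) -> float:
    """Expected damage of an option: probability of the harmful outcome times its damage."""
    return probability * damage

# Two hypothetical options with finite, comparable damages.
option_a = risk(probability=0.01, damage=1_000)           # fairly likely, small harm
option_b = risk(probability=0.000001, damage=5_000_000)   # rare, large harm
print(option_a, option_b)  # 10.0 vs 5.0 -> option_b looks "less risky" under this metric

# The Teller case: a probability of 3 in 1,000,000, but the damage is the end of the world.
# If that damage is modelled as unbounded, the product is unbounded too, and the
# formula no longer says anything about whether the risk is acceptable.
p_ignition = 3 / 1_000_000
damage_end_of_world = float("inf")
print(risk(p_ignition, damage_end_of_world))  # inf
```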

The criticism of the objective definition of risk is linked to both of the factors involved, probability and damage. We distinguish between subjective and objective probability. Objective probability can be defined as the limit of the relative frequency of an occurrence, whereas subjective probability is a degree of belief (cf. Hausman & McPherson, 1996, p. 225; Gethmann, 1987, p. 1131). While objective probability can be measured, subjective probability is a different form of information. It is not knowledge, but a form of assumption about possible worlds that has been made computable (Priddat, 1996, p. 107). Even objective probability is in many cases hard to come by. Nuclear power plants are an example of the problems of obtaining objective probabilities. The operators of such plants tend to tell the public that they are safe, while their opponents doubt the underlying numbers and probabilities and even doubt the fundamental possibility of calculating such probabilities (Kafka, 1994, p. 151).
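As an illustration of the frequentist reading of objective probability, the short sketch below (hypothetical, with an arbitrarily chosen "true" probability) simulates a repeatable event and shows how the relative frequency approaches the underlying probability as the number of trials grows. The practical difficulty noted above is that for singular events such as reactor accidents no comparable long run of trials is available.

```python
# Minimal sketch: "objective" probability as the limit of the relative frequency
# of an occurrence. The underlying probability is invented purely for illustration.
import random

random.seed(42)
TRUE_P = 0.05  # assumed probability of the event occurring in a single trial

for n in (100, 10_000, 1_000_000):
    occurrences = sum(random.random() < TRUE_P for _ in range(n))
    print(f"{n:>9} trials: relative frequency = {occurrences / n:.4f}")

# The observed frequency converges towards 0.05 as n grows. For one-off events
# (e.g. a specific reactor accident) no such series of comparable trials exists,
# so objective probabilities cannot simply be read off from observed frequencies.
```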

The dimension of damage in the objective notion of risk is equally contentious. The problem in this case is that it is impossible to objectify the results of decisions because they depend on what is taken into consideration. The range of results of any decision is infinite, and therefore the damage of any decision is also potentially infinite (Kaufmann, 1995, p. 79), which makes every decision infinitely risky.

We can conclude that there is no such thing as an objective risk because any description of risk depends on arbitrary decisions about the limits of the results taken into consideration and about the basis of the probability estimation. Risk is bound to human beings' mental or physical activities or their results (Banse, 1994, p. 127). What remains is a relative notion of risk. Using the term risk in a relative sense means that the speaker is describing something that has to do with contingency, with the fact that we do not know the future. It also has normative connotations. While risk may formerly have had the sound of courage and adventure, today it is often equated with the potential suicide of the human race (Beck, 1986, p. 28). In public discourse risk is a negative notion (Holzheu, 1993, p. 265).

A question related to the objectivity of risks is that of their reality. Proponents of an objective concept of risk tend to base their arguments for the objectivity of the measure of risk on the objectivity of the existence of risks. This raises deep metaphysical questions about the structure of the world that cannot be discussed here. However, one can state without too much fear of controversy that, for the purpose of describing the connection between society, risk, and eventually responsibility, 'real' is what affects the outcome of this discussion. In this sense Beck is right when he says that risks are real when they are perceived as real by humans (Beck, 1986, p. 103). This equation of risk with the perception of risk explains why we live in a risk society (Beck, 1986), while at the same time the scientific data about life expectancy, health, etc., can be interpreted to mean that we have never lived more risk-free than today. It is therefore not the dangers threatening our existence that grow, but their complexity and incomprehensibility. Decreasing dangers therefore do not result in an increase in security (Kaufmann, 1995, p. 87).

If risk is not an objective entity but related to perception, and at the same time is not something idiosyncratic but socially relevant, then the best way to describe it may be to see it as a social construct of ascription. Technology increases the reach of human actions and their results. This leads not only to better knowledge of causal chains but also to more complexity: the capacity for shaping social relationships increases, but so does the complexity of those relationships. One possible way of dealing with this in a quasi-moral fashion is the ascription of risks. A result of this development is that many things that used to be accepted as fate now move within the perimeter of human control. Dangers are thus transformed into risks (Birnbacher, 1995, p. 143). Risks, unlike dangers, are constituted by human actions, and they are therefore inherently moral.

This brings us back to the problem of responsibility and information systems. We have now seen that the development of the notion of risk as a description of social settings and relationships is in large part due to science and technology. On the one hand they produce risks in the objective sense by producing new dangers such as nuclear explosions or the greenhouse effect. On the other hand they produce risks by illuminating causal chains and thereby increasing the possibility of ascriptions. What is true for technology in general is also true for information technology. It produces some qualitatively new dangers that we will talk about in later chapters. Perhaps more important in the context of risks, however, is that IT allows us to handle complexity in new ways. IT stabilises social constructs such as organisations and bureaucracies and thereby opens new ways of ascription. At the same time it allows us to model on a greater scale and thus find new causalities. The entire area of climate protection and global pollution is only manageable with highly complex mathematical models that require advanced information technology. If we can say today that driving a car produces CO2 and that this is a cause of global warming, which in turn will lead to the melting of polar ice with a consequent flooding of low-lying countries such as Bangladesh, then we have an example of a risk that could only become visible through the use of IT. This kind of reasoning is morally relevant. On the grounds of the causal chain just described, I can say to any person driving a car: "You are (partially) responsible for the flooding of Bangladesh." Of course this is a problematic imputation, but the fact that it is possible and has a certain kind of plausibility is due to the scientific quality of the underlying model.

A problem with the business side of IS and risk is that risk plays an important role in economics and the economy, but the perception of risk in this context is completely different from that in technology. Economic activity is based on uncertainty, and risks are the lifeblood of business. In economic contexts the positive side of risk is generally accentuated. Risk is no longer the product of probability and damage, but of probability and profit. It is the highest praise for a manager to say that she is willing to take risks. This, of course, conflicts with the negative perception of risk in technology. The different views of risk held by business people and technicians are expressed in the famous phrase "put on your management hat," used to convince a senior technician to put aside his technical objections and think about the management side of a decision, which eventually led to the explosion of the space shuttle Challenger.

The same ambiguity is part of information systems, where uncertainty and contingency are interpreted differently depending on the point of view. While the technician may not believe that a certain piece of hardware or software has been developed and tested sufficiently, market constraints may force a company to sell it anyway. The fast pace of development in the IT sector thus results in a conflict of interest that apparently tends to be won by business. Proof of this can be seen in the multitude of programming errors, or bugs, that we find in even the most sophisticated and expensive software.

Strategies of Handling Risk

There are different ways in which one can react to risks. In this section we will look mainly at individual strategies, which will then lead us to the aggregate strategies that can be observed in societies. The first and most obvious behaviour when faced with risk is simply to ignore it. Escape from hazard and coincidence, from contingency in our terms, is an old remedy of philosophy. According to Rorty (1994, p. 55), such different philosophies as the Stoic abstention from passion, the Zen-Buddhist lack of will, Heidegger's dispassionateness, and physics as the absolute perception of reality serve this purpose.

However, man has escaped the determinism of the animal and won the freedom to make decisions. This, on the other hand, makes him uncertain. He is confronted with such a mass of information that he must try to do something to overcome insecurity (von Cube, 1995, p. 103). The next strategy for dealing with risk and uncertainty is thus their elimination. The Enlightenment, science, and technology can be interpreted as attempts in this direction. Unfortunately these attempts were all doomed to failure. Even in the case of objective risk, the measures taken to eliminate it have proved unsuccessful. Scientific risk analysis cannot eradicate ambivalence and uncertainty (Renn, 1996, p. 252). The production of more and better knowledge cannot be counted as a contribution to the elimination of uncertainty (Wiesenthal, 1990, p. 47).

The impossibility of eliminating uncertainty extends beyond the area of genuine uncertainty, that is to say the fact that we do not know the future, and covers the more manageable area of risk in the narrower sense. Risk, defined as 'damage x probability' and caused by human action, is not eliminable either. There is no option of total security in modern societies (if there ever was one) (Krawietz, 1995, p. 209f). As Lübbe (1998, p. 109) points out, a prohibition on producing risks for others would result in a general standstill of life. The catch is that every decision is bound to produce risk. It is impossible to decide risk-free because in any decision the alternative can be judged as a risk as well. The risk of option A is always opposed by the risk of option B, even if option B means doing nothing (Wiedemann, 1993, p. 57). Many authors therefore agree that, due to the impossibility of eliminating all risks, we have to accept them. This does not mean that we may not try to minimise them or to increase our knowledge about them. However, there is no point in trying to eradicate risks completely. This is true for all decisions in general (Kleinwellfonder, 1996, p. 26) and for decisions involving technology in particular.

If the inevitability of risks leads to their acceptance, then the next step is to ask which risks are acceptable, for it can certainly not be our conclusion that every risk should be taken. This is the step where ethics and morality enter the picture, and where responsibility will later find its justification. Why are risks ethically relevant, and why did we spend this much space discussing them? An example might serve as an answer. If someone who is completely isolated from the rest of the world, for example Robinson Crusoe, were to take a risk knowingly and willingly, there would not be much we could say about the moral consequences. If Robinson decides to hunt a tiger on his island or to take a swim in bad weather, then he may endanger himself, but such an action is not immoral. [5] The situation changes, however, as soon as Robinson is no longer alone. With the arrival of Friday, not only his daily routine changes, but also his moral duties. Since Friday may need Robinson for survival or comfort, the risks that Robinson takes now acquire a moral quality. If Robinson willingly puts his life on the line, then Friday is affected and the moral quality of the action changes. We can continue the variation of the theme and find that things may look different again if Robinson has a family or is a village chief or the President of the USA.

What is the difference between these cases, and why does the moral content change? The ethical evaluation of risks has largely to do with the question of whether the taker of the risk is at the same time the person affected by it. If this is so, as in the original Robinson example, then risk is ethically neutral. If we risk something and exclusively reap the good as well as the bad rewards, then there is no need for an ethical discussion about the risk. This situation changes when the person profiting from the risk is not identical with the one having to pay the price. Again, technology is a good example of this. The people who decide about the development and deployment of technology in our society are rarely identical with those who operate it, who are in turn different from those who suffer the potential consequences.
Before we proceed to the question of how responsibility can address these ethical problems of risk, we should enlarge our perspective from the individual to society.

Risk is, as we have seen, no longer an individual notion, even though individual interpretations and perceptions influence it. In fact, risk is a shared experience of many, if not all, humans in industrialised countries. Life under uncertainty and the resulting need for security are uniting features of modern societies (Banse, 1996, p. 19). Risk is on the one hand the form in which the future is made visible and rational in decisions (Luhmann, 1990, p. 29). On the other hand this rationalisation of risk has led to a new concept of society, the risk society. The term risk society was introduced by Beck (1986), and one of the reasons for its immediate success was the nuclear catastrophe at Chernobyl, which occurred at the same time. In effect the risk society is that social order in which nothing can be risked any more because so much is being risked already. The other side of the risk society is the insurance society. All systems have to function reliably because the dangers grow with the size of the system (Guggenberger, 1992, p. 44). The risk society is somewhat similar to the class society in that wealth accumulates in the upper classes whereas the risks have to be faced by the lower classes (Beck, 1986, p. 46). In this sense the risk society embodies the moral problem of risks because those who profit are not the same as those who pay. This brief sketch of the risk society leads us to the next point, the connection of individuals and society in the light of the development of IT.

Society, Modernity, and IT

The risk society with its particular reaction to risk is only one aspect of the changes we find in modern societies. It is a manifestation of the changes society undergoes in modernity. Risk and uncertainty can be seen as the most important aspects of modern societies. There has never been a cultural present that knew less about its future than ours (Lübbe, 1993, p. 33). This pervading theme of uncertainty is part of the definition of modernity. The term modernity is difficult to define. The word modernus was used during the late 5th century to distinguish the Christian present from the heathen past (Habermas, 1998c, p. 197). Ever since, it has been used to signify the intentional discontinuity between new and old. For Max Weber modernity stood for the principle of defining ends and of producing and using adequate means to meet them (Rohbeck, 1993, p. 49). Daele sees the increase of contingency as a main feature of modern culture. By the increase of contingency he means the translation of realities into possibilities, of limits into options, of substance into function, of absolute values into mere preferences (Daele, 1993, p. 172). We will not try to include all possible meanings of modernity, post-modernity (cf. Lyotard, 1993), second modernity, and whatever other terms have been invented for it, because the definitions are too disparate. What is important for our purposes is to note that modernity is defined by a huge increase of knowledge that paradoxically leads to decreasing knowledge of the future. The central form of dealing with this uncertainty is the category of risk (cf. Bonß, 1995).

Since the topic of this book is the relationship between responsibility and information systems, we have to take a brief look at the impact of modernity on the normative structures of society. One feature of the normative side of modernity is the loss of certainty. From the beginning of the Enlightenment, and then increasingly during industrialisation, traditional moral certainties were lost. The total reliance on God (or the church) for moral guidance became less tenable. For Max Weber this was the background of his explanation of the rise of capitalism, which served as a worldly replacement for religion (cf. Beck, 1986, p. 135). It is clear that modern societies destroy old forms of social commitment. That does not mean that modern societies have to disintegrate into individualism (Priddat, 1998, p. 88), but it does mean that they become less personal and more institutional (Wieland, 1999, p. 117). The institutionalisation of ethical values and their transformation into legal values are reasons why the subjects no longer experience them (Mai, 1996, p. 247). This is probably the reason why many, especially conservative, authors speak of a loss of values. In fact values are not simply lost; they change. The apparent change towards individualism and egotism is not a sign of decadence, but an objective necessity in the face of the expanding spaces of free disposition that modern societies entail (Lübbe, 1993, p. 33).

One last societal development that is important for our topic is that of the information society. The term information society denotes all of the different aspects that result from technological development and the resulting problems as described above. At first sight the most important characteristics of the information society are the structural and organisational changes induced by IT. It can be defined as a form of society and economy in which the production, storage, processing, transmission, spread, and use of information and knowledge (including growing technological possibilities and interactive communication) play an increasing part (Mittelstraß, 1997). If we look at the amount of information produced in our society, and the fact that more and more people in Western societies are employed dealing with information, then we can conclude that "our society is truly an information society, our time an information age" (Mason, 1986, p. 5).

It would however be simplistic to interpret the information society as nothing but a form of organisation using a certain technology. To overcome this myopic interpretation, it is helpful to ask why we collectively decided to develop our society or societies into information societies. According to Mason et al. (1995, p. 32), there are four motivations for humanity to embark on the ascent of information; these are: (a) the pursuit of wealth and the avoidance of destitution, (b) the pursuit of security and the avoidance of fear and uncertainty, (c) the pursuit of recreation and entertainment and the avoidance of boredom and depression, and (d) the pursuit of control and order and the avoidance of chaos. We see the recurring theme of uncertainty, but also other factors of moral relevance such as wealth and power. The reason we mention all of this here is to make it clear that the developments connected to business and information technology are not just facts to be described and accepted. All of the individual and societal developments discussed so far, from modernity, the information society, risk, and uncertainty to the driving role that economy and technology play in them, have a moral angle. All of these moral angles allow different ethical interpretations and have been analysed with different ethical theories and tools. There is one concept, however, that can be found in the works of different authors from different backgrounds and that seems especially suitable to address the issues at hand. That concept is the notion of responsibility.

[5] The good Kantian would of course argue that Kant demonstrated that suicide is unethical because it is self-contradictory. The autonomous subject cannot will its own non-existence without contradicting itself. However, it is not clear whether the same argument applies to the mere chance of being killed that is part of the risks we are talking about. If we wanted to rule this out on Kantian grounds, then this would lead to the unconvincing result that the Kantian could no longer go shopping because he might be run over by a car.



