5.6. Future Research Directions

We began this chapter by discussing some of the reasons why considerations of trust will be important for future privacy and security systems. Let us end it with some explicit consideration of the trust issues raised by future technologies. Researchers and developers are increasingly excited about the concept of Ambient Intelligence (AmI). This term, first coined by the Advisory Group to the European Community's Information Society Technology Programme (ISTAG), refers to the convergence of ubiquitous computing, ubiquitous communication, and interfaces that are both socially aware and capable of adapting to the needs and preferences of the user. It evokes a near future in which humans will be surrounded by "always-on," unobtrusive, interconnected intelligent objects, few of which will bear any resemblance to the computing devices of today.

One of the particular challenges of AmI, which distinguishes it from many other developments, is that the user will be involved in huge numbers of moment-to-moment exchanges of personal data without explicitly sanctioning each transaction. Today we already carry devices (mobile phones, personal digital assistants) that exchange personal information with other devices, but we initiate most of those exchanges ourselves. In the future, devices embedded in the environment, and potentially in the body, will use software agents to communicate seamlessly about any number of things: our present state of health, our preferences for what to eat, our schedule, our credentials, our destination, our need for a taxi to get us there in 10 minutes. Agent technologies will be required to manage this flow of information, and a great deal of exciting technical work is ongoing in this field. But many privacy and security questions remain unanswered. How might we instruct these agents about when, where, and to whom certain intensely personal details can be released?
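To make that question concrete, one could imagine a user-authored disclosure policy that an agent consults before releasing any personal attribute. The sketch below is purely illustrative: the `DisclosureRule` and `PrivacyAgent` names, and the attribute/recipient/context vocabulary, are assumptions for the example, not any existing system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DisclosureRule:
    """One user-authored rule: release `attribute` to `recipient` in `context`."""
    attribute: str   # e.g. "location", "health_status"
    recipient: str   # e.g. "taxi_service", "physician"
    context: str     # e.g. "commuting", "emergency"


class PrivacyAgent:
    """Checks each requested disclosure against the user's standing rules."""

    def __init__(self, rules):
        self.rules = set(rules)

    def may_release(self, attribute, recipient, context):
        # Release only if an explicit rule sanctions this exact disclosure.
        return DisclosureRule(attribute, recipient, context) in self.rules


agent = PrivacyAgent([
    DisclosureRule("location", "taxi_service", "commuting"),
    DisclosureRule("health_status", "physician", "emergency"),
])

agent.may_release("location", "taxi_service", "commuting")       # permitted
agent.may_release("health_status", "taxi_service", "commuting")  # denied
```

Even this toy model shows the usability problem at issue: a user is unlikely to enumerate such rules by hand, which is precisely why agents that can infer or adapt these policies are being investigated.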

We are involved in several new research projects that address these issues, and some things have become clear:

  • User engagement with such technologies is crucial if we are to ensure a future devoid of suspicion and paranoia, but most users don't understand the complex technologies at issue here, and so new research methods that invite genuine participation are required.

  • It is not enough to simply ask people about trust, privacy, or security in the abstract, because what people say and what they do are two different things.

  • Our future will be one in which many decisions are taken on our behalf by trusted third parties, so a great deal more information is required about the prerequisites for trust in regulatory bodies and agents. As we've already noted, a great deal of information is available concerning building and breaking trust in e-commerce, yet only very sparse literature exists on the ways in which people come to trust third parties in a mediated exchange. The time is right for a proper agenda for trust research directed specifically at security and privacy systems, rather than e-commerce alone. A related issue concerns the transfer of trust from one agent to another; recommender systems provide some interesting insights here, particularly concerning the kinds of networks that support the transfer of trust from one individual to another.

  • We need to know a great deal more about what happens following loss of trust. From what we know already, it seems that loss of trust can be quite catastrophic in a one-to-one relationship, but how does it percolate throughout a network of agents, each with its own set of trust indices? Such questions will be crucial for the development of privacy and security systems that people can genuinely trust.
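One simple way to reason about how lost trust might percolate through a network of agents is a damped propagation model: a breach at one agent imposes a trust penalty on its neighbours, attenuated at each hop. The sketch below is a toy model under that assumption; the `propagate_breach` function and the damping factor are illustrative inventions, not an established algorithm.

```python
from collections import deque


def propagate_breach(neighbours, source, damping=0.5):
    """Return each agent's trust penalty after a breach at `source`.

    `neighbours` maps each agent to the agents that rely on it; the
    penalty halves (by default) at every hop, so distant agents are
    less affected than direct partners of the breached agent.
    """
    penalty = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in neighbours.get(node, []):
            candidate = penalty[node] * damping
            # Keep only the strongest penalty reaching each agent.
            if candidate > penalty.get(nxt, 0.0):
                penalty[nxt] = candidate
                queue.append(nxt)
    return penalty


network = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
propagate_breach(network, "A")
# {'A': 1.0, 'B': 0.5, 'C': 0.5, 'D': 0.25}
```

Whether real trust decays geometrically like this, or collapses catastrophically as the one-to-one findings suggest, is exactly the kind of empirical question the research agenda above would need to answer.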



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295