Generalizing Trust: Classes of Tasks and Classes of Agents


In our model of trust we consider the trustor (X) and the trustee (Y) as single agents, and the task (τ) as a specific task. For reasons of generality, optimization, economy, and scalability, it would be useful to apply the trust concept not only to specific tasks and single agents, but also to classes of tasks and classes of agents (as humans generally do). A good theory of trust should be able to explain and possibly predict how and when an agent who trusts something or someone will therefore trust something or someone else.

From this perspective we have to cope with a set of problems, grouped into two main categories:

  • Given X's evaluation of Y's trustworthiness on a specific task τ, what can we say about X's evaluation of Y's trustworthiness on a different but analogous task τ'? What do we mean by an analogous task? When does the analogy between τ and τ' hold, and when does it break down? How can X's evaluation of Y's trustworthiness be modified on the basis of the characteristics of the new task? How can we group tasks into a class?

  • Given X's evaluation of Y's trustworthiness on a specific task (or class of tasks) τ, what can we say about X's evaluation of the trustworthiness of a different agent Z on the same task (or class of tasks) τ? Which characteristics of the agents allow the evaluation to transfer (or not) to different trustees?

In general, if an agent is trustworthy with respect to a specific task (or class of tasks), this means that the agent has a set of specific features (abilities and willingness) that are useful for that task (or class of tasks). If we assume, as a first approximation, that an agent's features remain unchanged over time, we can deduce that the agent will be trustworthy on any task that requires those same features.

More precisely, a task has a set of characterizing properties: it is on the basis of these properties that certain features are required of an agent.

In other (more formal) terms: for each task τ there is a minimal set of main features f_τ that an agent (say Ag1) must possess in order to be able to accomplish that task:

f_τ = {f_1, ..., f_n}

Some of these features are yes-or-no features, while others are measurable; for the measurable ones there is a minimal threshold to exceed:

∀ f_i such that f_i ∈ {f_1, ..., f_n} and f_i is measurable: f_i > σ_i, where σ_i is the minimal acceptable threshold for that feature.
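As an illustration only, the following Python sketch encodes this condition; the names (Requirement, trustworthy_on, the "deliver a package" task and its features) are hypothetical and not part of the formal model, which does not prescribe any particular data structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    """A feature required by a task: a yes-or-no feature when threshold is
    None, otherwise a measurable feature with minimal threshold sigma_i."""
    name: str
    threshold: float | None = None

def trustworthy_on(agent_features: dict[str, float | bool],
                   required: list[Requirement]) -> bool:
    """The agent is deemed trustworthy on tau if it possesses every required
    feature and every measurable feature f_i exceeds its threshold sigma_i."""
    for req in required:
        value = agent_features.get(req.name)
        if value is None:
            return False                      # a required feature is missing
        if req.threshold is None:
            if not value:                     # yes-or-no feature must hold
                return False
        elif value <= req.threshold:          # measurable feature: f_i > sigma_i
            return False
    return True

# Hypothetical task tau = "deliver a package" and agent Ag1
tau = [Requirement("has_vehicle"), Requirement("driving_skill", 0.7)]
ag1 = {"has_vehicle": True, "driving_skill": 0.8}
print(trustworthy_on(ag1, tau))  # True: all features present, 0.8 > 0.7
```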

Now, what can we say about the trustworthiness of Ag1 on a different task τ'?

Also for τ' we can identify a minimal set of main features f_τ' (notice that these features are determined by the properties of the new task τ'):

f_τ' = {f'_1, ..., f'_n}

We can say that if (f_τ' ⊆ f_τ) AND (∀ f'_i such that f'_i is measurable: f'_i > σ'_i), then Ag1 will also be trustworthy on the task τ'.

It is straightforward to extend this argument to a class of tasks: on the basis of the relevant common properties of the tasks in a given class (to which τ' belongs), we can predict that an agent endowed with the appropriate features for τ' will be trustworthy for the whole class.
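Continuing the sketch above (and reusing its Requirement, trustworthy_on, tau and ag1), a possible mechanical reading of this generalization step is shown below; the helper transfers and the "deliver groceries" class are illustrative assumptions, and the sketch presumes the agent's feature values carry over unchanged from τ to τ'.

```python
def transfers(known_task: list[Requirement],
              new_task: list[Requirement],
              agent_features: dict[str, float | bool]) -> bool:
    """Condition of the text: f_tau' is a subset of f_tau AND every
    measurable feature required by tau' exceeds its new threshold sigma'_i."""
    known_names = {r.name for r in known_task}
    new_names = {r.name for r in new_task}
    if not new_names <= known_names:                    # f_tau' subset of f_tau
        return False
    return trustworthy_on(agent_features, new_task)     # thresholds sigma'_i

# tau' = "deliver groceries": same features, slightly different threshold
tau_prime = [Requirement("has_vehicle"), Requirement("driving_skill", 0.6)]
print(transfers(tau, tau_prime, ag1))  # True: trust in Ag1 generalizes to tau'

# Extension to a class of tasks: the prediction holds for every member task
# whose required features are covered in the same way.
delivery_class = [tau, tau_prime]
print(all(transfers(tau, t, ag1) for t in delivery_class))  # True
```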

In this way, a cognitive model of trust, with its analytical power, can account for the inferential generalization of trustworthiness from task to task and from agent to agent, rather than relying only on specific experience and/or learning.



