Degrees of Trust: A Principled Quantification of Trust


The idea that trust comes in degrees is common (in common sense, in the social sciences, in AI) (Snijders & Keren, 1996). However, since no real definition and cognitive characterisation of trust is given, its quantification is usually ad hoc and arbitrary, and the introduction of this notion or predicate is semantically empty. Williamson (1985), for example, claims that "trust" is an empty and superfluous notion, used by sociologists merely for rhetoric, since it is simply reducible to subjective probability/risk. On the contrary, we claim that there is a strong coherence between the cognitive definition of trust and its mental ingredients, on the one side, and its value, its social functions, and its affective aspects (which we will not examine here), on the other side. More precisely, the latter are based on the former.

Here we will ground the degree of trust of X in Y in the cognitive components of X's mental state of trust. More precisely, we claim that the degree of trust is a function of the subjective certainty of the pertinent beliefs. We will use the degree of trust to formalize a rational basis for the decision to rely and bet on Y. Also in this case, we will claim that the "quantitative" aspect of another basic ingredient is relevant: the value, importance, or utility of the goal g will obviously enter the evaluation of the risk, and will also modify the required threshold for trusting.

In sum, the quantitative dimensions of trust are based on the quantitative dimensions of its cognitive constituents.

For us, the degree of trust is not an arbitrary index with a merely operational importance and no real content: it is grounded in the subjective certainty of the pertinent beliefs.

Trust in Beliefs and Trust in Action and Delegation

The solution we propose is not an ad hoc device introduced merely to ground some degree of trust; it substantiates a general claim. Pears (1971) points out the relation between the level of confidence in a belief and the likelihood of a person taking action based on that belief: "Think of the person who makes a true statement based on adequate reasons, but does not feel confident that it is true. Obviously, he is much less likely to act on it, and, in the extreme case of lack of confidence, would not act on it" (p. 15). (We have stressed the terms clearly related to the theory of trust.)

"It is commonly accepted that people behave in accordance with their knowledge." (Notice that this is precisely our definition of a 'cognitive agent'! but it would be better to use "beliefs.")

"The more certain the knowledge then the more likely, more rapid and more reliable is the response. If a person strongly believes something to be correct which is, in fact, incorrect, then the performance of the tasks which rely on this erroneous belief or misinformation will likewise be in error — even though the response may be executed rapidly and with confidence" (Hunt et al., 1997).

Thus, underlying our foundation of the degree of trust there is a general principle:

Agents act depending on what they believe, i.e., relying on their beliefs. And they act on the basis of the degree of reliability and certainty they attribute to their beliefs. In other words, trust/confidence in an action or plan (the reasons to choose it and the expectations of success) is grounded on, and derives from, trust/confidence in the related beliefs.

The case of trust in delegated tools or agents is just a consequence of this general principle in cognitive agents. Beliefs, too, are something one bets and risks on when one decides to base one's action on them. And chosen actions, too, are something one bets on, relies on, counts on, and depends upon. We trust our beliefs, we trust our actions, we trust delegated tools and agents. In an uncertain world any single action would be impossible without some form of trust (Luhmann, 1990).

The degree of trust of X in Y about the task τ (performing action α to achieve goal g) can thus be quantified as the product of the degrees of credibility (DoC) of the component beliefs:

$$DoT_{XY\tau} = DoC_X[Opp_Y(\alpha,g)] \cdot DoC_X[Ability_Y(\alpha)] \cdot DoC_X[WillDo_Y(\alpha,g)]$$

where:

  • DoC_X[Opp_Y(α,g)] is the degree of credibility of X's beliefs about Y's opportunity of performing α to realize g;

  • DoC_X[Ability_Y(α)] is the degree of credibility of X's beliefs about Y's ability/competence to perform α;

  • DoC_X[WillDo_Y(α,g)] is the degree of credibility of X's beliefs about Y's actual performance, which (given that Y is a cognitive agent) decomposes as:

$$DoC_X[WillDo_Y(\alpha,g)] = DoC_X[Intend_Y(\alpha,g)] \cdot DoC_X[Persist_Y(\alpha,g)]$$
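To make this composition concrete, the following is a minimal Python sketch of the degree-of-trust product defined above, assuming all credibility degrees lie in [0, 1]; the function name and the example values are illustrative, not part of the model.

```python
def degree_of_trust(opp: float, ability: float, intend: float, persist: float) -> float:
    """Degree of trust of X in Y about the task tau = (alpha, g).

    Each argument is the degree of credibility (DoC), in [0, 1], that X
    assigns to the corresponding belief about Y. WillDo is decomposed into
    intention and persistence, given that Y is a cognitive agent.
    """
    will_do = intend * persist       # DoC_X[WillDo_Y(alpha, g)]
    return opp * ability * will_do   # DoC_X[Opp] * DoC_X[Ability] * DoC_X[WillDo]

# Illustrative values: X is almost certain of Y's ability,
# but less certain that Y will persist until the goal is achieved.
dot_xy = degree_of_trust(opp=0.9, ability=0.95, intend=0.8, persist=0.7)
print(f"DoT_XY(tau) = {dot_xy:.3f}")  # ~0.479
```

Note how the multiplicative form makes the degree of trust fragile: a single weak belief (here, persistence) drags the whole product down, regardless of how certain the other beliefs are.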

To Trust or Not to Trust: Degrees of Trust and Decision to Trust

In any circumstance, an agent X endowed with a given goal has three main choices:

  1. to try to achieve the goal by itself;

  2. to delegate the achievement of that goal to another agent Y;

  3. to do nothing (relative to this goal), renouncing it.

So we should consider the following abstract scenario (Figure 8), where we call:


Figure 8: Complete Scenario

U(X), the agent X's utility function, and specifically:

U(X)p+, the utility of X's successful performance;

U(X)p-, the utility of X's failed performance;

U(X)d+, the utility of a successful delegation (the utility due to the success of the delegated action);

U(X)d-, the utility of a failed delegation (the damage due to the failure of the delegated action);

U(X)0, the utility of doing nothing.

However, for the sake of brevity, we will consider a simplified scenario:

Figure 9: Simplified Scenario

In the scenario given in Figure 9, in order to delegate we must have:

$$DoT_{XY\tau} \cdot U(X)_{d+} + (1 - DoT_{XY\tau}) \cdot U(X)_{d-} > DoT_{XX\tau} \cdot U(X)_{p+} + (1 - DoT_{XX\tau}) \cdot U(X)_{p-}$$

where DoT_XXτ is the self-trust of X about τ.

More precisely, we have:

  • U(X)p+ = Value(g) + Cost[Performance(X,τ)]

  • U(X)p- = Cost[Performance(X,τ)] + Additional Damage for failure

  • U(X)d+ = Value(g) + Cost[Delegation(X,Y,τ)]

  • U(X)d- = Cost[Delegation(X,Y,τ)] + Additional Damage for failure

where it is supposed that a quantitative value (importance) can be attributed to the goals, and where the costs of the actions (delegation and performance) are assumed to be negative.
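The following is a minimal sketch of the resulting decision rule, under the simplified scenario and the utility decomposition just given; the variable names and the numeric values are hypothetical.

```python
def should_delegate(dot_xy: float, dot_xx: float,
                    u_d_plus: float, u_d_minus: float,
                    u_p_plus: float, u_p_minus: float) -> bool:
    """True iff the expected utility of delegating tau to Y exceeds the
    expected utility of X performing tau itself (the inequality above)."""
    eu_delegation = dot_xy * u_d_plus + (1 - dot_xy) * u_d_minus
    eu_performance = dot_xx * u_p_plus + (1 - dot_xx) * u_p_minus
    return eu_delegation > eu_performance

# Hypothetical ingredients: Value(g), negative action costs, failure damage.
value_g = 10.0     # Value(g)
cost_perf = -2.0   # Cost[Performance(X, tau)]
cost_deleg = -0.5  # Cost[Delegation(X, Y, tau)]
damage = -1.0      # additional damage for failure

u_p_plus, u_p_minus = value_g + cost_perf, cost_perf + damage    # 8.0, -3.0
u_d_plus, u_d_minus = value_g + cost_deleg, cost_deleg + damage  # 9.5, -1.5

# With DoT_XY = 0.8 and self-trust DoT_XX = 0.6, delegation wins here.
print(should_delegate(0.8, 0.6, u_d_plus, u_d_minus, u_p_plus, u_p_minus))  # True
```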

Then, we obtain:

$$DoT_{XY\tau} > A \cdot DoT_{XX\tau} + B \qquad (1)$$

where:

$$A = \frac{U(X)_{p+} - U(X)_{p-}}{U(X)_{d+} - U(X)_{d-}}$$

$$B = \frac{U(X)_{p-} - U(X)_{d-}}{U(X)_{d+} - U(X)_{d-}}$$

Let us now consider the two terms A and B separately.

As for the term A: if U(X)p+ - U(X)p- > U(X)d+ - U(X)d-, then A > 1 and hence A·DoT_XXτ > DoT_XXτ; i.e., if the difference between the utility of success and the utility of failure in delegation is smaller than the difference between the utility of success and the utility of failure in non-delegation, then (as far as the term A is concerned) in order to delegate

  • the trust of X in Y must be bigger than the self-trust of X (about τ).

Vice versa, if U(X)p+ - U(X)p- < U(X)d+ - U(X)d-, then A < 1 and hence A·DoT_XXτ < DoT_XXτ; i.e., if the difference between the utility of success and the utility of failure in delegation is bigger than the difference between the utility of success and the utility of failure in non-delegation, then (as far as the term A is concerned) in order to delegate

  • the trust of X in Y could be smaller than the self-trust of X (about τ).

So it is also possible to delegate to an agent whom one trusts less than oneself.

Considering now the term B: if U(X)p- - U(X)d- > 0, then a positive term is added to A, i.e., A + B > A. In other words, if the utility of failure in the case of non-delegation is bigger than the utility of failure in the case of delegation, then, in order to delegate, the trust of X in Y about τ must be greater than in the case in which the right-hand side of (1) consists of A alone.

Vice versa, if U(X)p- - U(X)d- < 0, then A + B < A; i.e., if the utility of failure in the case of non-delegation is smaller than the utility of failure in the case of delegation, then, in order to delegate, the trust of X in Y about τ can be smaller than in the case in which the right-hand side of (1) consists of A alone.

Both A and B contain a normalization factor, (U(X)d+ - U(X)d-): the more its value increases, the more the weight of both terms is reduced.
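As a numeric illustration (reusing the hypothetical utilities of the earlier sketch), A, B, and the resulting trust threshold of inequality (1) can be computed directly:

```python
# Hypothetical utilities from the earlier sketch.
u_p_plus, u_p_minus = 8.0, -3.0
u_d_plus, u_d_minus = 9.5, -1.5

A = (u_p_plus - u_p_minus) / (u_d_plus - u_d_minus)   # 11 / 11 = 1.0
B = (u_p_minus - u_d_minus) / (u_d_plus - u_d_minus)  # -1.5 / 11 ~ -0.136

dot_xx = 0.6  # self-trust of X about tau
threshold = A * dot_xx + B
print(f"delegate only if DoT_XY > {threshold:.3f}")   # ~0.464
```

Here B is negative (failing while performing oneself is worse than failing after delegating), so the required trust in Y falls below the self-trust, consistent with the analysis of the term B above.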

Since DoT_XYτ ≤ 1, from (1) we can obtain (substituting the maximum value DoT_XYτ = 1 and solving for DoT_XXτ):

$$DoT_{XX\tau} < \frac{U(X)_{d+} - U(X)_{p-}}{U(X)_{p+} - U(X)_{p-}} \qquad (2)$$

From (2) we can say that, for X to delegate the task τ to Y, as the self-trust (DoT_XXτ) grows, the difference between the utility of success in delegation and the utility of failure in non-delegation (the numerator) must grow as well.

Alternatively (to delegate), as the self-trust (DoT_XXτ) grows, the difference between the utility of success and the utility of failure in non-delegation (the denominator) must be reduced.

Because DoT_XXτ ≥ 0, from (2) we obtain:

$$U(X)_{d+} > U(X)_{p-} \qquad (3)$$

(consider that by definition we have U(X)p+ > U(X)p-, so the denominator of (2) is positive).

In practice, a necessary (but not sufficient) condition for delegating is that the utility of success in delegation be greater than the utility of failure in non-delegation.
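A short sketch can check these necessary conditions for a given utility assignment; the values below are hypothetical, chosen (with a costlier delegation than before) so that the bound in (2) stays below 1 and is therefore binding:

```python
# Hypothetical utilities with a costlier delegation than in the earlier sketch.
u_p_plus, u_p_minus = 8.0, -3.0
u_d_plus, u_d_minus = 7.0, -4.0

# Condition (3): success in delegation must beat failure in non-delegation.
assert u_d_plus > u_p_minus

# Condition (2): an upper bound on self-trust, beyond which no degree of
# trust in Y (even DoT_XY = 1) makes delegation the better option.
max_self_trust = (u_d_plus - u_p_minus) / (u_p_plus - u_p_minus)
print(f"delegation can pay off only if DoT_XX < {max_self_trust:.3f}")  # ~0.909
```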



