Definition of Trust and its Cognitive Anatomy


When we trust someone, what kind of relationship are we establishing with that person? What is going on in the minds of the interacting agents? Is trust a decision, a relationship, a static disposition, or an evaluation? What are the necessary ingredients of the mental state of trust? Are these ingredients useful for defining trust?

In the cognitive and social sciences we do not have a shared or dominant, clear and convincing notion of trust. Authors working on trust frequently provide their own definitions, aimed not at generality but at suiting a specific domain (commerce, politics, technology, etc.). Even those "general" definitions (with some cross-domain validity) are usually either incomplete or redundant: they omit, leave implicit, or take for granted important components, or they include something merely accidental and domain specific.

Not only is there no shared and dominant definition, but there is not even a clear model of trust as a mental attitude ("I trust Mary, but not enough to count on her"), as a decision and action ("How could you trust Mary?!"), and as a social relationship (depending on, counting on, not worrying about).

The aim of this chapter is to provide a general, abstract (domain-independent) operational notion and model of trust, which starts from and specializes the common-sense (natural-language) notion and the intuitive notions frequently used in the social sciences, but defines a technical scientific construct (for cognitive and social theory).

This model is based on a portrait of the mental state of trust in cognitive terms (beliefs, goals). This is not a complete account of the psychological dimensions of trust: it represents the most explicit (reason-based) and conscious form. We do not account here for the more implicit forms of trust (for example, trust by default, not based upon explicit evaluations or beliefs or derived from previous experience or other sources) or for the affective dimensions of trust, based not on explicit evaluations, but on emotional responses and an intuitive, unconscious appraisal.

Also the social and relational aspects are put aside in this chapter, because we think that the mental kernel is the basis of the rest and the first issue to be clarified.

We will specify which beliefs and which goals characterise X's trust in another agent Y about Y's behaviour/action α relevant for a given result (goal of X) g. Given the overlap between trust and reliance/delegation, we also need to clarify their relationship.

We present in this chapter our socio-cognitive analysis of trust. We will identify the different meanings and concepts that are hidden under the word "trust" in its everyday use, but also in the notion used within the psychological and social sciences.

We will analyze in detail the cognitive ingredients of trust and show how these elements are the basis for various kinds of delegation. Trust will be characterized by internal and external attributions, and by its rational or irrational nature. We will show how it is possible to transfer the trust evaluation from a specific task to a class of tasks, or to a different agent. Finally, the reader will find some hints about the debate on the different (socio-cognitive, game-theoretical, and philosophical) approaches to the study of trust.

A deep analysis of the concept of trust can be very useful for understanding the role of trust in knowledge management and systems in organizations. In organizations, trust plays a primary role, and it is not merely linked trivially with the notion of role. Trust is strongly supported by the organization's roles and functions, but only by understanding the interrelations between these concepts (such as delegation and trust) is it possible to really analyze how organizations work.

Different Concepts of Trust

"Trust" means different things, but they are systematically related with each other. In particular, we analyze three crucial concepts that have been recognized and distinguished not only in natural language, but in the scientific literature. Trust is at the same time:

  • A mere mental attitude (prediction and evaluation) towards another agent, a simple disposition;

  • A decision to rely upon the other, i.e., an intention to delegate and trust, which makes the trustor "vulnerable" (Mayer et al., 1995);

  • A behaviour, i.e., the intentional act of trusting, and the consequent relation between the trustor and the trustee.

Consider this example:

John is the new director of a firm; Mary and Francesca are employees, possible candidates for the role of John's secretary. Now, consider these three different aspects (and components) of John's trust toward them:

  1. John evaluates Mary on the basis of his personal knowledge — if any — of her CV and of what the others say about her; on such a basis he forms his own opinion and evaluation about Mary: how much does he consider Mary trustworthy as a secretary? Will this trust be enough for choosing Mary and enough for him to decide to bet on her?

  2. Now John has also considered Francesca's CV, reputation, etc.; he knows that there are no other candidates and decides to count on Mary as his secretary, i.e., he has the intention of delegating this job/task to Mary when he reorganizes his staff.

  3. One week later, in fact, John nominates Mary as his secretary and uses her, thus, he actually trusts her for this job.

In situation (1) John has to form some kind of trust that is in fact an evaluation of Mary as a secretary. This trust might or might not be sufficient in that situation. Since he knows that there is also another candidate, he cannot yet choose Mary; he can just express a mere evaluation of her, and perhaps this evaluation is not good enough for her to be chosen.

In situation (2) John arrives at a decision that is based on trust and is the decision to trust. It is also possible that, in John's opinion, neither Mary nor Francesca is suitable as a secretary, and he might make the decision not to trust them and to consider different hypotheses: (i) to issue a new call for a secretary; (ii) to choose the less bad of the two for a brief period, looking for new candidates in parallel; or (iii) to remain for a period without any secretary; and so on.

In situation (3) John expresses his trust through a delegation, in fact, an interaction (or, more in general, through an observable external behaviour).

We will call these three different kinds of trust respectively: trust disposition (or core trust), decision to trust, and delegation (or act of trusting).

In fact, core trust and decision to trust are the synthesis of a complex mental state of the trustor: of a part of his beliefs and goals. In the next section we will analyse these mental ingredients in more detail (see also Figure 1).

Figure 1: Trust Disposition, Decision to Trust, and Act of Trusting

The trust disposition is a determinant and precursor of the decision which is a determinant and a precondition of the act; however, the disposition and the intention remain as mental constituents during the next steps.

In the case of autonomous agents whose freedom to rely on others is not conditioned by external constraints, we can say that a sufficient value of core trust is a necessary but not sufficient condition for a positive decision to trust, and the decision to trust is a necessary but not sufficient condition for delegation. Vice versa, delegation implies the decision to trust, and the decision to trust implies a sufficient value of core trust.
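
This one-way chain of implications can be sketched in code. The sketch below is our own illustration (the function and parameter names are invented), treating each step as a boolean condition that presupposes the previous one:

```python
# Sketch of the one-way implications among the three kinds of trust:
# delegation implies the decision to trust, which implies a sufficient
# value of core trust; none of the reverse implications holds.

def consistent(core_trust_sufficient: bool,
               decided_to_trust: bool,
               delegated: bool) -> bool:
    """A mental state is consistent only if each step presupposes the previous one."""
    if delegated and not decided_to_trust:
        return False  # delegation without the decision to trust
    if decided_to_trust and not core_trust_sufficient:
        return False  # decision to trust without sufficient core trust
    return True

assert consistent(True, False, False)     # core trust without deciding: fine
assert consistent(True, True, False)      # decided but not (yet) delegated: fine
assert not consistent(False, True, True)  # decision without core trust: inconsistent
```

Note that the converse cases (trust without decision, decision without delegation) remain consistent, which captures "necessary but not sufficient."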

A Belief and Goal-Based Model of Trust

As we have just seen, the different kinds of trust are all based on the trustor's resultant cognitive attitude (on the basis of his mental ingredients). We are now going to analyze these mental ingredients.

First of all, in our cognitive view of the trust phenomenon we claim that only an agent endowed with goals and beliefs can "trust" another agent.

Let us consider the trust of an agent X towards another agent Y about (Y's) behaviour/action α relevant for the result (goal) g, where:

  • X is the (relying) agent, who feels trust; it is a cognitive agent endowed with internal explicit goals and beliefs;

  • Y is the agent or entity which is trusted;

  • X trusts Y about g/α and for g/α.

In our model Y is not necessarily a cognitive agent (for instance, an agent may or may not trust a chair to sustain his weight when he sits on it). On the contrary, X must always be a cognitive agent: so, in the case of artificial agents we should be able to simulate these internal explicit goals and beliefs.

However, in this chapter we will consider both the trustor and the trustee as cognitive agents since we model social trust.

An interesting question is: which kinds of mental ingredients are necessary? Just informational attitudes, or also motivational ones?

For all three notions of trust defined above (trust disposition, decision to trust, and delegation) we claim that someone trusts someone else only relative to some goal, i.e., for something he wants to achieve, something he desires. An unconcerned agent does not really "trust": he just has opinions and forecasts.

Second, trust itself consists of beliefs. We do not analyze here the "feeling" of trust, and trust as an affective disposition or an implicit attitude.

Since Y's action is useful to X (trust disposition), and X has decided to rely on it (decision to trust), X might delegate (act of trusting) some action/goal in his own plan to Y. This is the strict relation between core trust, decision to trust, and delegation.

Trust disposition and decision to trust are the mental counter-part of delegation.

Basic Beliefs and Goals in Trust Disposition

Let us start from the most elementary case of trust: it is useful in fact to identify which ingredients are really basic in the mental state of trust disposition.

First, X has some goal g to achieve, and for this goal X tries to evaluate the possibility of using Y. It is relevant to underline that we use the term "goal" in a rather broad sense, as it is frequently used in Artificial Intelligence and in psychology. It refers not only to objectives to be achieved by some action, but to states of the world that are subjectively desirable and positive. For example, having a sunny day for the weekend, that Mary loves me, not being noticed in public, winning a lottery, eating, satisfying hunger, etc., can all be goals. Goals that are not yet realized, are not self-realizing, can be realized, and require my action become (generate) intentions and objectives.

For having a positive trust disposition, X must have some specific beliefs:

  • Competence Belief: a sufficient evaluation of Y's abilities is necessary; X should believe that Y is useful for this goal of X, that Y can produce/provide the expected result, and that Y can play such a role in X's plan/action.

  • Willingness Belief: X should think that Y not only is able to do that action/task, but will actually do what X needs (under the given circumstances). This belief makes the trustee's behaviour predictable.

These are the two prototypical components of trust as an attitude towards Y. They are the real cognitive kernel of trust (Figure 2).

G0: GoalX(g)

B1: BX(CanY(α,g)) (positive evaluation)

B2: BX(<WillDoY(α)>g) (prediction)

Figure 2: Trust Disposition

We have used the logics of Meyer, van Linder, and van der Hoek (Meyer, 1992; van Linder, 1996), introducing some "ad hoc" predicates (like WillDo). B is the belief operator (the classical doxastic operator).

It is important to underline that, on the one hand, the two beliefs included in this mental state are not simple and neutral judgements, because they are considered with respect to X's goal g. On the other hand, they are not yet explicit, specific expectations because, before the decision to trust is taken, X's goal is not yet linked with these beliefs and does not yet produce new goals about them.
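
The kernel just described (the goal G0 plus the beliefs B1 and B2) can be sketched as a simple data structure. This is our own illustrative encoding (field names are invented, not the authors' notation):

```python
# Minimal sketch of the trust-disposition kernel: X's goal g plus the
# competence belief (B1) and the willingness belief (B2) about Y,
# relative to action alpha and result g.
from dataclasses import dataclass

@dataclass
class TrustDisposition:
    trustor: str              # X
    trustee: str              # Y
    action: str               # alpha
    goal: str                 # g  (G0: Goal_X(g))
    competence_belief: bool   # B1: X believes Y can achieve g via alpha
    willingness_belief: bool  # B2: X believes Y will actually do alpha

    def positive(self) -> bool:
        """A positive trust disposition requires both kernel beliefs."""
        return self.competence_belief and self.willingness_belief

d = TrustDisposition("John", "Mary", "secretarial work", "run the office", True, True)
assert d.positive()
```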

Additional Cognitive Ingredients for the Decision to Trust

The second kind of trust (the decision to trust) contains all the ingredients of the first, plus some additional beliefs and goals; in particular:

  • Dependence Belief: X believes that (in order to trust Y and delegate to it) either X needs it and depends on it (strong dependence) (Sichman et al., 1994), or at least that it is better for X to rely on it than not to (weak dependence) (Jennings, 1993).

In other terms, when X trusts someone, X is in a strategic situation (Deutsch, 1973): X believes that there is interference (Castelfranchi, 1998) and that his rewards, the results of his projects, depend on the actions of another agent Y.

The (strong or weak) dependence belief hides, in fact, a comparison between Y's trustworthiness and that of others (X included) in that specific context and on that specific task.

When X decides to trust, X has also two new goals:

  • the goal that Y has the competence for that task;

  • the goal that Y has the willingness for that task.

These two new goals, together with the previous corresponding beliefs, should be considered as the positive expectations of competence and of willingness. In making the decision to trust Y, X believes that Y has the right qualities, power, ability, competence, and disposition for g. Thus, the trust that X has in Y is (clearly and importantly) part of, and based on, his or her esteem, "image," and reputation (Dasgupta, 1990; Raub & Weesie, 1990).

A positive expectation is the combination of a goal and a belief about the future (prediction): X both believes that g and X desires/intends that g.

In this case, X both believes that Y can and will do and X desires/wants that Y can and will do.

In addition, there is also:

  • the goal of not pursuing g by himself and of not searching for alternatives to Y.

In Figure 3 we summarize the main mental ingredients of the decision to trust.

Figure 3: Decision to Trust

Of course, there is a coherence relation between these two aspects of trust (trust disposition and decision to trust): the decision to bet and wager on Y is grounded on and justified by the beliefs B1, B2, and B3. More than this, the degree or strength of trust (see later) must be sufficient for the decision to rely and bet on Y (Marsh, 1994; Snijders & Keren, 1996). The trustworthiness beliefs about Y (trust disposition) are the presupposition of the act of trusting Y.

Given this strict relation and the foundational role of delegation, we need to define delegation and its levels, and also to clarify the differences between delegation, the decision to trust, and trust disposition. After this we will return to the mental ingredients of trust and their social significance.

Delegation/Reliance

In delegation or reliance, the delegating agent (X) needs or likes an action of the delegated agent (Y) and includes it in his own plan: X relies on Y. X plans to achieve g through Y. So, X is constructing a multi-agent plan and Y has a share in this plan: Y's delegated task is either a state-goal or an action-goal (Castelfranchi & Falcone, 1997).

We have classified delegation in three main categories: weak, mild and strong delegation.

In weak delegation there is no influence from X to Y, no agreement: generally, Y is not aware of the fact that X is exploiting his action. As an example of weak and passive, but already social, delegation, which is the simplest form of social delegation, consider a hunter who is waiting and is ready to shoot an arrow at a bird flying towards its nest. In his plan the hunter includes an action of the bird: to fly in a specific direction; in fact, this is why he is not pointing at the bird but at where the bird will be in a second. He is delegating to the bird an action in his plan; and the bird is unconsciously and unintentionally collaborating with the hunter's plan.

In stronger forms of delegation, X can himself act by eliciting or inducing the desired behaviour in Y in order to exploit it. Depending on the reactive or deliberative character of Y, this induction is either based on a simple stimulus or on beliefs and complex forms of influence. In these cases we have mild delegation.

Strong delegation is based on Y's awareness of X's intention to exploit his action; normally it is based on Y adopting X's goal (for any reason: love, reciprocation, common interest, etc.), possibly after some negotiation (request, offer, etc.) concluded by some agreement and social commitment.

In our general view, trust is the mental counterpart of delegation, even if, considering the possible temporal gap between the decision to trust and the delegation, we have to consider some other interesting mental elements. (The temporal gap ranges between 0 and ∞; 0 means that delegation occurs at the same time as the decision to trust; ∞ means that delegation remains just a potential action.) In particular, in all cases (weak, mild, and strong delegation) X has an intention that Y will achieve the task (Grosz & Kraus, 1996). In every case this intention is composed of different intentions:

  • In weak delegation, we have three additional intentions (I1, I2, and I3 in Figure 4): respectively, the intention to rely upon Y's action; the intention not to do that action himself (nor to delegate it to others); and the intention not to obstruct that action with other interfering actions.

    Figure 4: Weak Delegation

  • In mild delegation, in addition to I1, I2, and I3, there is another intention (I4): X's intention to influence Y so that Y will achieve τ (Figure 5).

    Figure 5: Mild Delegation

  • In strong delegation, in addition to I1, I2, and I3, there are two other intentions (I4 and I5): respectively, X's intention to request that Y do τ, and the intention to obtain confirmation of Y's commitment to τ (see Figure 6).

    Figure 6: Strong Delegation

Consider that in mild and strong delegation these intentions are already present in the decisional phase, and they are the result of an evaluation. For example, X has to evaluate whether the delegation will be successful or not in the case of influence, request, etc.
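
The composition of X's intention across the three kinds of delegation can be summarized in a small sketch. This is our own encoding (the labels I1-I5 follow the text above):

```python
# Sketch of the intentions composing X's intention that Y will achieve
# the task tau, for each kind of delegation.
# I1: rely upon Y's action; I2: not do it himself nor delegate it to others;
# I3: not obstruct it; I4: influence (mild) or request (strong) Y's action;
# I5: obtain confirmation of Y's commitment (strong delegation only).

DELEGATION_INTENTIONS = {
    "weak":   {"I1", "I2", "I3"},
    "mild":   {"I1", "I2", "I3", "I4"},
    "strong": {"I1", "I2", "I3", "I4", "I5"},
}

def intentions_for(kind: str) -> set:
    """Return the set of intentions the text attributes to each delegation kind."""
    return DELEGATION_INTENTIONS[kind]

# Each stronger form strictly extends the weaker one:
assert intentions_for("weak") < intentions_for("mild") < intentions_for("strong")
```

(Note that the content of I4 differs between mild delegation, where it is an influence, and strong delegation, where it is an explicit request.)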

Trust and Delegation

There are important differences, and some independence, between trust and delegation.

We will use the three-argument predicate Trust(X, Y, τ) to denote a specific mental state composed of other, more elementary mental attitudes (beliefs, goals, etc.), while we use the predicate Delegate(X, Y, τ) to denote the action and the resulting relation between X and Y.

Delegation is necessarily an action, the result of a decision, and it also creates and is a (social) relation among X, Y, and α. The external, observable action/behaviour of delegating either consists of the action of provoking the desired behaviour, of convincing and negotiating, of charging and empowering, or simply of doing nothing (omission), waiting for and exploiting the behaviour of the other.

There may be trust without delegation: either the level of trust is not sufficient to delegate, or the level of trust would be sufficient, but there are other reasons preventing delegation (for example, prohibitions).

So, trust is normally necessary for delegation, but it is not sufficient; delegation requires a richer decision.

There may be delegation without trust: these are exceptional cases in which either the delegating agent is not free (coercive delegation: suppose that you don't trust at all a drunk guy as a driver, but you are forced by his gun to let him drive your car) or he has no information and no alternative to delegating, so that he must just make a trial (blind delegation).

The decision to delegate has no degrees: either X delegates or X does not. Trust, on the contrary, has degrees: X trusts Y more or less relatively to α, and there is a threshold below which trust is not enough for delegating.
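
This contrast between graded trust and the binary decision to delegate can be sketched as a threshold test (the function name and numeric values are purely illustrative, not part of the model):

```python
# Sketch of the text's point: trust is graded, while the decision to
# delegate is binary. A degree of trust in [0, 1] is compared against
# a threshold; both numbers here are illustrative.

def delegates(degree_of_trust: float, threshold: float = 0.7) -> bool:
    """Either X delegates or X does not: a graded input yields a binary output."""
    return degree_of_trust >= threshold

assert delegates(0.9) is True    # enough trust: X delegates
assert delegates(0.5) is False   # below the threshold: no delegation
```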

Social Trust: Trusting Cognitive Agents

When applied to cognitive intentional agents, the basic beliefs of trust need to be articulated in (and supported by) other beliefs. In fact, how can one predict/expect that an intentional agent will do something, if not on the basis of the "intentional stance," i.e., on the basis of beliefs about its motives, preferences, intentions, and commitments?

All this must be combined with different kinds of delegation. There are various kinds, levels and specific ingredients of trust in relation to the kind of delegation and the degree of autonomy.

Trust in Weak Delegation

Weak delegation does not presuppose any agreement, deal, or promise. For example, X weakly delegates when, at a bus stop, X relies on another person to raise his arm and stop the bus, predicting that he will do so, and risking missing the bus.

When applied to a cognitive, intentional agent, weak delegation implies that the "will-do" belief be articulated in and supported by a couple of other beliefs (which will continue to be valid also in strong delegation):

Willingness Belief: X believes that Y has decided and intends to do τ. In fact, for this kind of agent to do something, it must intend to do it. So trust requires modelling the mind of the other.

Persistence Belief: X should also believe that Y is stable enough in his intentions, that he has no serious conflicts about τ (otherwise he might change his mind), that Y is not unpredictable by character, etc.

When X relies on Y for his action, X is taking advantage of Y's independent goals and intentions, predicting Y's behaviour on such a basis (or, in the case of mild delegation, X himself induces such goals in order to exploit Y's behaviour). In any case, X not only believes that Y is able to do and can do the action (opportunity), but also that Y will do it because he is committed to this intention or plan (not necessarily to X) (Cohen & Levesque, 1990).

Self-confidence Belief: X should also believe that Y knows that he can do τ; thus Y is self-confident. It is difficult to trust someone who does not trust himself!

Let's simplify and formalize this.

Introducing some "ad hoc" predicates (like WillDo or Persist), we might characterise the social trust mental state (in the logics of Meyer, 1992; van Linder, 1996) as follows:

Trust(X,Y,τ) =

(GoalX(g)) AND
(BX(PracPossY(α,g))) AND
(BX(PreferX(DoneY(α,g), DoneX(α,g)))) AND
(BX(IntendY(α,g) AND PersistY(α,g))) AND
(GoalX(IntendY(α,g) AND PersistY(α,g)))

Where: PracPossY(α,g) = <DoY(α)>g AND AbilityY(α). To formalize results and opportunities, this formalism borrows constructs from dynamic logic: <Doi(α)>g denotes that agent i has the opportunity to perform the action α in such a way that g will result from this performance.

In other words, trust is the set of mental attitudes characterizing the "delegating" agent's mind: X prefers that another agent do the action α; Y is a cognitive agent, so X believes that Y intends to do the action and that Y will persist in it.
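
As a rough illustration, the conjunctive structure of this mental state can be transcribed into code. The field names below are our own glosses of the formula's conjuncts, not part of the formalism:

```python
# Sketch of the formalized social-trust mental state as a conjunction of
# five conditions: X's goal, the practical-possibility belief, the
# preference belief, the intend-and-persist belief, and the corresponding
# positive-expectation goal.
from dataclasses import dataclass

@dataclass
class SocialTrust:
    has_goal_g: bool               # Goal_X(g)
    bel_prac_poss: bool            # B_X(PracPoss_Y(alpha, g))
    bel_prefer_y_does_it: bool     # B_X(Prefer_X(Done_Y(alpha,g), Done_X(alpha,g)))
    bel_intend_and_persist: bool   # B_X(Intend_Y(alpha,g) AND Persist_Y(alpha,g))
    goal_intend_and_persist: bool  # Goal_X(Intend_Y(alpha,g) AND Persist_Y(alpha,g))

    def holds(self) -> bool:
        """Trust(X, Y, tau) holds only if every conjunct holds."""
        return all((self.has_goal_g, self.bel_prac_poss,
                    self.bel_prefer_y_does_it, self.bel_intend_and_persist,
                    self.goal_intend_and_persist))

assert SocialTrust(True, True, True, True, True).holds()
assert not SocialTrust(True, True, True, False, True).holds()  # no intend/persist belief
```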

Trust in Strong Delegation: Adoption-Based Trust

Let us now come to social trust in strong delegation, which is the typical and strict sense of trust in the social sciences. The mental attitude is the same (that is why it is important to relate trust to any level of delegation), i.e., all the previous beliefs hold, but there are some specific additional features. Strong delegation is in fact based on Y's awareness and implicit or explicit agreement (compliance); it presupposes goal-adoption by Y. To trust Y in this case thus means to trust his agreement and willingness to help/adopt (social commitment).

Trusting Motivation and Morality of Y

First of all, it is very important to analyze the beliefs about the motives of Y. In particular, it is crucial to specify the beliefs about the adoptive (helping) attitude of Y and its motives and persistence.

Motivation Belief: X believes that Y has some motives for helping him (for adopting his goal), and that these motives will probably prevail, in case of conflict, over other motives that work against him.

Notice that motives inducing to adoption are of several different kinds: from friendship to altruism, from morality to fear of sanctions, from exchange to common goal (cooperation).

This is why, for example, it is important that trustor and trustee have a common culture, shared values, and the same acknowledged authorities. The belief in shared values or in acknowledged authority (Conte et al., 1998) is evidence of, and a basis for, believing that Y is sensitive to certain motives and that these motives are important and prevailing.

In particular, beliefs about Y's morality are relevant for trusting him. When there is a promise, not only does Y have an intention to do α, but he has such an intention (also) because he is "committed to" X to do α; there is an (explicit or implicit) promise to do so, which implies an interpersonal duty (X has some rights over Y: to demand, to complain, etc.) (Castelfranchi, 1996) and, in organisations, institutions, and societies, an obligation (derived from social norms) to do α (since he promised X).

An additional form of trust is needed: the belief that Y has been sincere (if he said that he intends to do it, he really intends to do it) and that he is honest/truthful (if he has made a commitment, he will keep his promise; he will do what he ought to do).

On such a basis (Y's adoptive disposition), X supports his beliefs that "Y intends to do" and "Y will persist," and hence the belief that Y "will do."

Only this kind/level of social trust can really be "betrayed": if Y is not aware of X's reliance and trust, or did not (at least implicitly) agree to it, he is not really "betraying" X: Y would not be "responsible" for not doing τ.

Internal Attribution of Trust: Trustworthiness

We should distinguish between trust 'in' someone or something that has to act and produce a given performance thanks to its internal characteristics, and the global trust in the overall event or process and its result, which is also affected by external factors such as opportunities and interferences.

Trust in Y (for example, "social trust" in its strict sense) seems to consist in the first two prototypical beliefs/evaluations we identified as the basis for reliance: ability/competence (which, with cognitive agents, includes knowledge and self-confidence) and disposition (which, with cognitive agents, includes willingness, persistence, engagement, etc.). An evaluation of opportunities is not really an evaluation of Y (at most, the belief about Y's ability to recognize, exploit, and create opportunities is part of our trust "in" Y). We should also add an evaluation of the probability and consistency of obstacles, adversities, and interferences.

We will call this part of the global trust (the trust "in" Y relative to its internal powers, both motivational and competential) internal trust (or subjective trustworthiness). In fact, this trust is based on an "internal causal attribution" (to Y, from the point of view of X) of the causal factors/probabilities of the successful or unsuccessful event.

Trust can be said to consist of, or better to imply (either implicitly or explicitly), the subjective probability of the successful performance of a given behaviour α, and it is on the basis of this subjective perception/evaluation of risk and opportunity that the agent decides whether or not to rely and bet on Y. However, this probability index is based on, and derives from, those beliefs and evaluations. In other terms, the global, final probability of the realisation of the goal g, i.e., of the successful performance of α, should be decomposed into the probability of Y performing the action well (which derives from the probability of willingness, persistence, engagement, and competence: internal attribution) and the probability of having the appropriate conditions (opportunities and resources: external attribution) for the performance and for its success, and of not having interferences and adversities (external attribution). Why is this decomposition important? Not only for cognitively grounding such a probability (which after all is "subjective," i.e., mentally elaborated), a cognitive embedding that is fundamental for relying, influencing, persuading, etc., but because:

  1. the agent trusting/delegating decision might be different with the same global probability or risk, depending on its composition;

  2. the composition of trust (internal vs. external) produces completely different intervention strategies: manipulating the external variables (circumstances, infrastructures) is completely different from manipulating internal parameters.
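
The decomposition discussed above can be sketched numerically. Assuming, for simplicity, that the two components are independent (an assumption of ours, made only for illustration), the same global probability can arise from very different internal/external compositions:

```python
# Sketch of the decomposition of the global subjective probability of
# success into an internal component (Y performs alpha well) and an
# external component (favourable conditions, no interference).
# The numbers are purely illustrative.

def global_success_probability(p_internal: float, p_external: float) -> float:
    """Assumes, for simplicity, independence of the two components."""
    return p_internal * p_external

# "I completely trust him, but it is a very hard task!"
high_trust_hard_task = global_success_probability(0.9, 0.4)
# "The task is not difficult, but I do not have enough trust in him."
low_trust_easy_task = global_success_probability(0.4, 0.9)

# Two very different compositions, the same global risk:
assert abs(high_trust_hard_task - low_trust_easy_task) < 1e-9
```

The two calls illustrate exactly the pair of situations discussed in the text: identical global expectation, opposite internal/external composition.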

Let us consider the first point. There might be different heuristics, or different personalities with different propensities to delegate or not in case of weak internal trust (subjective trustworthiness), even given the same global risk. For example: "I completely trust him, but he cannot succeed; it is a very hard task!" versus "The mission/task is not difficult, but I do not have enough trust in him." The problem is that, given the same global expectation, one agent might decide to trust/rely in one case but not in the other, or vice versa! In these terms this is an irrational, psychological bias. But this bias might be adaptive, perhaps even useful with artificial agents. There could be logical and rational meta-considerations about a decision even in these apparently indistinguishable situations. Two possible examples of such meta-considerations are:

  • giving trust (and then delegating) increases the agent's experience (so, comparing two different situations, one in which we attribute low trustworthiness to the agent and one in which we attribute high trustworthiness to him, with obviously the same resulting global probability, we have a criterion for deciding);

  • the trustor can learn different things from the two possible situations: for example, with respect to the agents, or with respect to the environments.

As for the second point, the strategies for increasing trust are very different depending on whether your diagnosis of the lack of trust rests on an external or an internal attribution. If there are adverse environmental or situational conditions, your intervention will consist in establishing protection conditions and guarantees, preventing interferences and obstacles, and establishing rules and infrastructures; whereas if you want to increase your trust in your contractor, you should work on his motivation, his beliefs and disposition towards you, or on his competence, self-confidence, etc.

We should also consider the reciprocal influence between external and internal factors. When X trusts the internal powers of Y, X also trusts Y's abilities to create positive opportunities for success and to perceive and react to external problems. Vice versa, when X trusts the environmental opportunities, this evaluation could change his trust in Y (X could come to think that Y is not able to react to specific external problems).

Environmental and situational trust (which are claimed to be so crucial in electronic commerce and computer-mediated interaction) are aspects of external trust. It is important to stress that when the environment and the specific circumstances are safe and reliable, less trust in Y (the contractor) is necessary for delegation (for example, for transactions).

Vice versa, when X strongly trusts Y (his capacities, willingness, and faithfulness), X can accept a less safe and reliable environment (with less external monitoring and authority).

We account for this "complementarity" between the internal and the external components of trust in Y for g in given circumstances and a given environment.

However, we should not identify "trust" with "internal or interpersonal or social trust" and claim that when trust is absent, something else replaces it (for example, surveillance, contracts, etc.). It is just a matter of different kinds, or better facets, of trust.

Is Trust a Belief in the Other's Irrationality?

The trust needed in promises, contracts, business, organisations and collaboration has been an object of study in the social sciences. These studies correctly stress the relationship between sincerity, honesty (reputation), friendliness and trust.

However, sometimes this relationship has not been formulated in a very linear way, especially from the perspective of game theory and within the framework of the Prisoner's Dilemma, which strongly influenced the whole treatment of defection, cheating, and social dilemmas.

Consider for example the definition of trust proposed by Gambetta (1990) in his interdisciplinary discussion on trust.

"When I say that I trust Y, I mean that I believe that, put on test, Y would act in a way favourable to me, even though this choice would not be the most convenient for him at that moment."

So formulated (considering subjective rationality), trust is the belief that Y will choose and behave in a non-rational way! How could he otherwise choose what is perceived as less convenient? This is the usual dilemma in the PD game: the only rational move is to defect.
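The Prisoner's Dilemma structure behind this claim can be shown with a minimal payoff check. The payoff values below are the conventional textbook ones, chosen by us for illustration:

```python
# Row player's payoffs in a standard Prisoner's Dilemma:
# payoff[my_move][other_move], with moves "C" (cooperate) / "D" (defect).
payoff = {
    "C": {"C": 3, "D": 0},  # cooperate: mutual reward 3, sucker's payoff 0
    "D": {"C": 5, "D": 1},  # defect: temptation 5, mutual punishment 1
}

# Whatever the other player does, defecting yields strictly more,
# so defection dominates -- the "rationality" Gambetta's definition
# seems to ask Y to violate.
for other in ("C", "D"):
    assert payoff["D"][other] > payoff["C"][other]
print("Defection strictly dominates cooperation.")
```

This dominance is what makes trusting Y, under a purely economic reading of rationality, look like a belief in Y's irrationality; the text's reply is that the payoff matrix omits Y's non-economic motives.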

Since trust is one of the pillars of society (no social exchange, alliance, cooperation, institution or group is possible without trust), should we conclude that the entire society is grounded on the irrationality of the agents: either the irrationality of Y, or the irrationality of X in believing that Y will act irrationally, against his better interest!

As usual in arguments and models inspired by rational decision theory or game theory, along with rationality, "selfishness" and "economic motives" (utility, profit) are also smuggled in (Castelfranchi & Conte, 1993).

When X trusts Y in strong delegation (goal-adoption and social commitment by Y), X is not assuming that Y, by not defecting, acts irrationally, i.e., against his interests. Perhaps he acts "economically irrationally" (i.e., sacrificing his economic goals); perhaps he acts in an unselfish way, preferring to his selfish goals some altruistic or pro-social or normative motive; but he is not irrational, because he is just following his subjective preferences and motives, including friendship, love, norms, honesty, etc.

Thus when X trusts Y, X is just assuming that other motivations will prevail over Y's economic interests or other selfish goals. We can say that trust is a theory and an expectation about the kind of motivations the agent is endowed with, and about which motivations will prevail in case of conflict.

X not only believes that Y will intend and persist (and then will act), but X believes that Y will persist because certain motives of his are more important than other motives inducing him to defection and betrayal. And these motives are already there, in Y's mind and in their agreement: X does not have to find new incentives, to think of additional prizes or possible punishments. If X is doing so (for example, by promising or threatening), X does not really trust Y (yet).

Internal Trust, Unharmfulness and Goodwill

The theory of internally attributed trust is also important for clarifying and modelling an additional nucleus of trust that we call "unharmfulness" or "adoptivity," and for quickly setting it apart. In fact, we refer precisely to a sense of safety, a feeling that "there is nothing to worry about as far as Y is concerned": no danger, no suspicion, no hostility; and, more than this, the idea that the other will help, is well disposed, will care about our interests, will adopt them: the belief that "Y would act in a way favourable to me" (Gambetta, 1990).

Now, this nucleus is part of the internal trust, and refers precisely to its social aspect. We might say that "trust in" Y distinguishes between Y's general mental attitudes relevant for a reliable action, and his "social", more precisely "adoptive" or pro-social, aspect, i.e., Y's disposition towards me (or towards people in general).

The first part of internal trust is important in any kind of delegation, but especially in weak delegation, where Y might be unaware that we are exploiting his action and does not usually care about our interests or goals. In this situation what matters is Y's personal commitment, his will, his persistence, his ability, etc. On the contrary, our reliance/delegation may be based on the fact that Y takes into account our goals and possibly favours them (goal-adoption, help) (Castelfranchi, 1991; Miceli et al., 1995), or at least avoids damaging them (collaborative coordination (Castelfranchi, 1998), passive goal-adoption (Castelfranchi, 1996)), or, even more strongly, on Y's social commitment (promises, role, etc.) towards us; in this case what we believe about his disposition towards us and our interests is very crucial and is a relevant basis of our trust. In fact, Strong Delegation presupposes Y's goal-adoption: her or his acceptance of my reliance, her or his adhesion to my (implicit or explicit) request, or her or his spontaneous help. And X believes that his bet will be successful thanks to Y's acceptance and adoption and willingness to be helpful. Moreover, X trusts in the persistence of the collaborative or pro-social attitude and in its prevalence over interfering motives. This is what Bonnevier-Tuomela (1999) interestingly proposes to identify as Y's "good-will" (although she aims to put into it all the different aspects of the internal trust in a cognitive agent, which we prefer to distinguish).

In sum, if Y's goal-adoption is the basis of X's delegation, then X counts on Y's adoption of her or his goal, not simply on Y's action or intention. X believes/trusts that Y will do α for X! More precisely, the trust that Y will do α is implied and supported by the trust that Y will do α for X.

This is why the beliefs about the social disposition of Y are crucial (although we know that goal adoption can be for several kinds of motives, from selfish to altruistic) (Conte & Castelfranchi, 1995).

So, we claim that internal attribution distinguishes among three areas:

  • the ability of Y (skill, know-how, care and accuracy, self-confidence);

  • the non-social, generic motivational aspects (intention, persistence, serious engagement, effort);

  • the social attitudes: these basically consist in the belief/feeling that there is a pro-attitude (Tuomela, 1995), a "goodwill" towards (also) us, and that there are no conflicting (anti-social) motives or that, in any case, the adoptive attitude (for whatever motivation) will prevail. The weaker form of this component of trust is the belief of "unharmfulness": there is no danger, no hostility, no defeating attitude, no antipathy, etc.

Doubts and suspicions can separately affect each of these three facets of our internal trust 'in' Y:

  • he is not so expert, skilled, or careful; he is not smart or reactive enough to recognise and exploit opportunities or to cope with interferences;

  • he is quite voluble, or not engaged enough and not putting in effort, or he has a conflict of preferences: he both wills and wills not at the same time;

  • there is some unconscious hostility, some antipathy; he does not care much about me; he is quite selfish and egocentric.

In this framework it is quite clear why we trust friends. First we believe that as friends they want our good, they want to help us; thus they both will adopt our request and will keep their promise. Moreover, they do not have reasons for damaging us or for secretly harboring antipathy against us. Even if there is some conflict, some selfish interest against us, friendship will be more important for them. We rely on the motivational strength of friendship.
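The three-way internal attribution above can be sketched as a simple data structure. The class and field names are our own shorthand for the text's categories, and the numeric scores are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class InternalTrust:
    """Sketch of the three areas of internally attributed trust
    distinguished in the text (labels are our own assumptions)."""
    ability: float      # skill, know-how, care and accuracy, self-confidence
    motivation: float   # intention, persistence, engagement, effort
    goodwill: float     # pro-social/adoptive attitude, unharmfulness

    def weakest_facet(self) -> str:
        """Doubts can target each facet separately; report the facet
        currently most affected (lowest score)."""
        facets = {"ability": self.ability,
                  "motivation": self.motivation,
                  "goodwill": self.goodwill}
        return min(facets, key=facets.get)

# A friend: strong goodwill and motivation, even if competence is in doubt.
friend = InternalTrust(ability=0.4, motivation=0.8, goodwill=0.9)
print(friend.weakest_facet())  # ability
```

The sketch makes the text's point concrete: trusting a friend is above all a bet on the goodwill facet, while our doubts may well concentrate on a different facet such as ability.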

Open Delegation and Delegation of Control

With cognitive, autonomous agents it is possible to have "open delegation." In open delegation (Castelfranchi & Falcone, 1998), X delegates to Y a goal to be achieved rather than a specific performance. Y has to "bring about that g": he should find a correct plan, choose, adapt, and so on. X either ignores or does not specify the necessary action or plan. Y is more autonomous, and X must also trust Y's cognitive ability in choosing and planning; X in fact depends not only on Y's resources and practical abilities, but also on Y's problem-solving capacity and knowledge: Y must be competent to solve the delegated problem. In social trust we are really betting on Y's mind.

The deepest level of trust in a fully autonomous agent is the delegation of, or the renunciation of, monitoring and control. X is so sure that Y will do what X expects (for example, what he promised) that X does not check up on or inspect him. In fact, when we monitor or inspect somebody who is doing something we need, he can complain to us: "this means that you don't trust me!!" Of course, renouncing control increases the risk, since it increases the possibility that X is deceived, and delays possible repairs or protections.




L., Iivonen M. Trust in Knowledge Management Systems in Organizations, 2004
