6.6 Protocols
Systems provide a useful way of representing situations. But where does the system come from? Changes often occur as a result of actions. These actions, in turn, are often performed as a result of agents using a protocol or strategy.

Actions change the global state. Typical actions include tossing heads, going left at an intersection, and sending a message. A protocol for agent i is a description of what actions i may take as a function of her local state. For simplicity, I assume here that all actions are deterministic, although protocols may be nondeterministic or probabilistic. Thus, for example, Alice's protocol may involve tossing a coin in some round. I view "coin tossing" as consisting of two deterministic actions—tossing heads and tossing tails (denoted toss-heads and toss-tails, respectively).

This can be formalized as follows. Fix a set Li of local states for agent i (intuitively, these are the local states that arise in some system) and a set ACTi of possible actions that agent i can take. A protocol Pi for agent i is a function that associates with every local state ℓ in Li a nonempty subset of actions in ACTi. Intuitively, Pi(ℓ) is the set of actions that agent i may perform in local state ℓ. The fact that Pi is a function of the local state of agent i, and not of the global state, is meant to capture the intuition that an agent's actions can be a function only of the agent's information.
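As a toy illustration, a protocol can be written as an ordinary function from local states to nonempty sets of actions. The sketch below uses invented state and action names; it is not from the text, just a direct transcription of the definition:

```python
# A minimal sketch of a protocol: a map from local states to
# nonempty sets of actions. State and action names are illustrative.
def alice_protocol(local_state):
    """Return the set of actions Alice may perform in local_state."""
    if local_state == "coin_in_hand":
        # Nondeterministic: either deterministic action may be taken.
        return {"toss-heads", "toss-tails"}
    return {"do-nothing"}

actions = alice_protocol("coin_in_hand")
assert actions  # a protocol must return a *nonempty* subset of ACT_Alice
```

Note that the function sees only the local state, never the global state, which is exactly the information constraint the definition imposes.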

If Pi is deterministic, then Pi prescribes a unique action for i at each local state; that is, |Pi(ℓ)| = 1 for each local state ℓ in Li. For protocols that are not deterministic, rather than just describing what actions agent i may take at a local state, it is often useful to associate a measure of likelihood, such as probability, possibility, or plausibility, with each action. A probabilistic protocol for i is a protocol where each local state is associated with a probability measure over a subset of actions in ACTi. Thus, if P is a probabilistic protocol that involves tossing a fair coin at some local state ℓ, then P(ℓ) = {toss-heads, toss-tails}, where each of these actions is performed with probability 1/2.
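A probabilistic protocol can likewise be sketched as a map from local states to probability distributions over actions. Again, the names below are illustrative, not from the text:

```python
# Sketch: a probabilistic protocol maps each local state to a
# probability measure over a subset of actions (names illustrative).
def coin_toss_protocol(local_state):
    if local_state == "about_to_toss":
        # A fair coin: each deterministic action has probability 1/2.
        return {"toss-heads": 0.5, "toss-tails": 0.5}
    return {"do-nothing": 1.0}

dist = coin_toss_protocol("about_to_toss")
assert abs(sum(dist.values()) - 1.0) < 1e-9  # a genuine probability measure
```

A deterministic protocol is the special case where every distribution puts probability 1 on a single action.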

Just like the agents, the environment has a protocol Pe, which is a map from Le, the set of possible environment states, to nonempty subsets of ACTe, the set of possible environment actions. The environment's protocol models those features that are beyond the control of the agents, such as when messages get delivered in a distributed system, what the weather will be like, or the type of opponent a player will face in a game.

In general, agents do not run their protocols in isolation. A joint protocol (Pe, P1, …, Pn), consisting of a protocol for the environment and a protocol for each of the agents, associates with each global state a subset of possible joint actions, that is, a subset of ACT = ACTe × ACT1 × ⋯ × ACTn. If each of the "local" protocols that make up the joint protocol is probabilistic, then a probability on the joint actions can be obtained by treating each of the local protocols as independent. Thus, for example, if Alice and Bob each toss a coin simultaneously, then taking the coin tosses to be independent leads to an obvious measure on {toss-heads, toss-tails} × {toss-heads, toss-tails}. (In most of the examples given here, I ignore the environment protocol and the environment state. In general, however, they play an important role.)
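Under the independence assumption, the probability of a joint action is just the product of the local action probabilities. A small sketch for the two-coin example (names invented for illustration):

```python
from itertools import product

# Treating the local protocols as independent: the probability of a
# joint action is the product of the local action probabilities.
alice = {"toss-heads": 0.5, "toss-tails": 0.5}
bob   = {"toss-heads": 0.5, "toss-tails": 0.5}

joint = {(a, b): alice[a] * bob[b] for a, b in product(alice, bob)}

# Each of the four joint actions gets probability 1/4.
assert joint[("toss-heads", "toss-heads")] == 0.25
assert abs(sum(joint.values()) - 1.0) < 1e-9
```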

There is a minor technical point worth observing here. Although I am taking the local protocols to be independent, this does not mean that there cannot be correlated actions. Rather, it means that if the actions are correlated, there must be something in the agents' local states that allows the correlation. For example, suppose that Alice and Bob each have two coins, one of bias 2/3 and one of bias 1/3. Charlie has a fair coin. Alice and Bob observe Charlie's coin toss, and then use the coin of bias 2/3 if Charlie tosses heads and the coin of bias 1/3 if Charlie tosses tails. Alice and Bob's protocols are still independent. Nevertheless, Alice getting heads is correlated with Bob getting heads. The correlation is due to a correlation in their local states, which reflect the outcome of Charlie's coin toss.
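The point can be checked by direct calculation with the numbers from the example: the tosses are independent conditional on Charlie's coin, yet both agents getting heads is strictly more likely than it would be if their tosses were unconditionally independent:

```python
from fractions import Fraction

# Charlie's fair coin determines which biased coin Alice and Bob use.
half    = Fraction(1, 2)
b_heads = Fraction(2, 3)  # bias used if Charlie tosses heads
b_tails = Fraction(1, 3)  # bias used if Charlie tosses tails

# Marginal probability that Alice (or, symmetrically, Bob) gets heads.
p_heads = half * b_heads + half * b_tails        # = 1/2

# Both get heads: the tosses are independent *given* Charlie's coin.
p_both = half * b_heads**2 + half * b_tails**2   # = 5/18

assert p_heads == Fraction(1, 2)
assert p_both == Fraction(5, 18)
assert p_both > p_heads * p_heads  # 5/18 > 1/4: the outcomes are correlated
```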

Joint actions transform global states. To capture their effect, associate with every joint action a function from global states to global states. For example, the joint action consisting of Alice choosing 1 and Bob doing nothing maps the initial global state (〈〉, 〈〉, 〈〉) to the global state (〈1〉, 〈1〉, tick). Given a joint protocol and a set of initial global states, it is possible to generate a system in a straightforward way. Intuitively, the system consists of all the runs that are obtained by running the joint protocol from one of the initial global states. More formally, say that run r is consistent with protocol P if it could have been generated by P, that is, for all m, r(m + 1) is the result of applying to r(m) a joint action a that could have been performed according to protocol P. (More precisely, there exists a joint action a = (a1, …, an) such that ai ∈ Pi(ri(m)) and r(m + 1) = a(r(m)).) Given a set Init of global states, the system ℛ(P, Init) consists of all the runs consistent with P that start at some initial global state in Init. The system ℛ represents P if ℛ = ℛ(P, Init) for some set Init of global states.
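One way to picture the construction of the system is as a breadth-first generation of run prefixes. The following sketch is a finite-horizon simplification with all names invented; it enumerates the runs consistent with a joint protocol from a set of initial global states:

```python
from itertools import product

# Sketch: generate the runs obtained by running a joint protocol from
# the initial global states, up to a finite horizon (illustrative only).
def generate_runs(protocols, transition, init_states, horizon):
    """protocols: one map local_state -> set(actions) per agent;
    transition: (global_state, joint_action) -> global_state.
    A global state here is a tuple of local states, one per agent."""
    runs = [(g,) for g in init_states]
    for _ in range(horizon):
        new_runs = []
        for run in runs:
            g = run[-1]
            # Every joint action the protocols allow at g, componentwise.
            for joint in product(*(P(loc) for P, loc in zip(protocols, g))):
                new_runs.append(run + (transition(g, joint),))
        runs = new_runs
    return runs

# Toy instance: one agent who tosses a coin once, then does nothing.
protocols  = [lambda loc: {"toss-heads", "toss-tails"} if loc == "" else {"noop"}]
transition = lambda g, joint: (joint[0],) if g == ("",) else g
runs = generate_runs(protocols, transition, [("",)], 1)
assert len(runs) == 2  # one run per outcome of the toss
```

The full construction in the text differs in that runs are infinite and the environment has its own protocol; the sketch only shows how consistency with the protocol shapes the set of runs.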

If P is a probabilistic joint protocol (i.e., if each component is probabilistic), ℛ represents P, and there is a probability on the initial global states in ℛ, then there is a straightforward way of putting a probability on ℛ, by viewing ℛ as Markovian. The probability of the time-m prefix of a run is just the probability of the run's initial state multiplied by the probabilities of the joint actions that generated the prefix. This probability on runs can then be used to generate an SDP system, as discussed earlier.

Example 6.6.1


If on sunny days Alice tosses a coin with bias 2/3, on cloudy days Alice tosses a coin with bias 1/4, and sunny days happen with probability 3/4 (these numbers do not necessarily correspond to reality!), the resulting system consists of four runs, in two computation trees, as shown in Figure 6.6. Now the probability of r1, for example, is 3/4 × 2/3 = 1/2. The probability of the other runs can be computed in a similar way.


Figure 6.6: Tossing a coin whose bias depends on the initial state.
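The computation in Example 6.6.1 can be reproduced mechanically. The sketch below assumes the four runs correspond to the sunny/cloudy and heads/tails combinations shown in Figure 6.6:

```python
from fractions import Fraction

# The probability of each run is the probability of its initial state
# times the probability of the coin-toss action taken (Example 6.6.1).
p_sunny, p_cloudy       = Fraction(3, 4), Fraction(1, 4)
bias_sunny, bias_cloudy = Fraction(2, 3), Fraction(1, 4)

run_probs = {
    "sunny-heads":  p_sunny  * bias_sunny,        # the run r1 in the text
    "sunny-tails":  p_sunny  * (1 - bias_sunny),
    "cloudy-heads": p_cloudy * bias_cloudy,
    "cloudy-tails": p_cloudy * (1 - bias_cloudy),
}

assert run_probs["sunny-heads"] == Fraction(1, 2)  # 3/4 x 2/3, as computed above
assert sum(run_probs.values()) == 1                # the four runs exhaust the system
```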

This discussion assumes that the protocol was probabilistic and that there was a probability on the initial states. Although these assumptions are often reasonable, and they have always been made in the models used in game theory, situations where they do not hold turn out to be rather common in the distributed computing literature. I defer a more detailed look at this issue to Section 6.9.




Reasoning About Uncertainty
ISBN: 0262582597
Year: 2005
Pages: 140