
17.3. Concurrency

When multiple computers must work together at the same time, all sorts of interference problems can arise. Shared object updates are a simple example. Unless an application is designed carefully, there is always the danger that one client will overwrite another client's data in a shared object. While there is a slot-level locking mechanism, there is no intrinsic mechanism for locking shared object slots for a controlled period of time.

In the database field, interference problems are well understood and often easily dealt with using the database's ability to lock records, maintain a multiversion consistency model, and commit or roll back transactions. In real-time multiuser applications, the tools available to deal with interference problems are often not quite so advanced. Consequently, FlashCom developers need to be aware of the potential for problems and how to deal with them. Unfortunately, the subject of concurrency is a large one and cannot be covered in detail here. A good reference is the chapter on concurrency in recent editions of C. J. Date's book An Introduction to Database Systems (Addison-Wesley). This section reviews a number of problems you may run into when developing FlashCom-enabled applications and some common ways to deal with them.

17.3.1. Serializing Requests and the ActionScript Thread

In all versions of FlashCom (up to and including 1.5.2), each application instance has one and only one ActionScript thread. For applications that use Server-Side ActionScript extensively throughout the instance's life, the single thread can be a performance bottleneck. So some caution is required when designing applications with regard to how much server-side code must be invoked regularly.

In general, server-side code should be designed to run as quickly as possible and not spend a long time regularly performing complex calculations such as collision detection. However, the single thread is also a valuable resource for dealing with all sorts of concurrency problems.


Every call( ) and send( ) message is queued on the server if it cannot be handled immediately. If Server-Side ActionScript is invoked by a call( ) or send( ) message, each message is dealt with sequentially. There is no possibility that another message can interfere with one that is being processed by server-side code. As already noted in Chapter 13 and Chapter 15, calling server-side code to get an exclusive lock on a resource is an excellent use of the call( ) method. Whenever a locking mechanism is needed or a particular order of operation must be enforced on individual clients, each client should call a server-side method using call( ). The server-side script can then set values in a shared object to make every client aware of the correct state of the application, or create an internal queue of events or objects to be processed in order.
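To make the pattern concrete, here is a minimal sketch in JavaScript (the language Server-Side ActionScript is based on) of a lock implemented as a server-side method. The names acquireLock( ), releaseLock( ), and lockOwner are illustrative, not part of the FlashCom API; the sketch relies only on the fact that messages are processed one at a time.

```javascript
// Sketch of a server-side lock built on the single-threaded message queue.
// A client would invoke acquireLock via nc.call("acquireLock", responder, userID).
var lockOwner = null;

function acquireLock(userID) {
  // Because messages are processed one at a time, no two clients can
  // run this function concurrently, so the check-and-set is atomic.
  if (lockOwner === null) {
    lockOwner = userID;
    return true;                  // caller now holds the lock
  }
  return lockOwner === userID;    // true only if this caller already holds it
}

function releaseLock(userID) {
  if (lockOwner === userID) {
    lockOwner = null;
    return true;
  }
  return false;                   // only the holder may release
}
```

No explicit mutex is needed; the serialization of the message queue is the synchronization mechanism.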

Some caution is required with shared objects.

Shared object updates occur asynchronously so that, even while server-side code is executing, the values in a shared object can change.


The shared object data visible to a server-side script is not a snapshot in time. If calculations such as banking transactions must be performed using multiple slots, the only option is to make it impossible for clients to update the shared object directly. When sophisticated data management is required, a much better strategy is to have a database do the difficult work of managing transactions and use a shared object as a read-only way to make clients aware of the state of any part of the database that must be visible to clients.
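As a sketch of that strategy, the following JavaScript models server-mediated updates: clients call transfer( ) rather than writing slots directly, and the server publishes a read-only snapshot after each successful transaction. The accounts table stands in for database rows, and publishBalances( ) stands in for copying values into a shared object; all names are illustrative.

```javascript
// Illustrative multi-slot transaction handled entirely in server-side code.
var accounts = { alice: 100, bob: 50 };   // stand-in for database rows
var lastSnapshot = null;                  // stand-in for the shared object

function transfer(from, to, amount) {
  // Runs to completion on the single ActionScript thread, so both
  // balances change together or not at all.
  if (!(from in accounts) || !(to in accounts)) return false;
  if (amount <= 0 || accounts[from] < amount) return false;
  accounts[from] -= amount;
  accounts[to] += amount;
  publishBalances();                      // push the new state to clients
  return true;
}

function publishBalances() {
  // In FlashCom this would copy each balance into a slot of a
  // shared object that clients treat as read-only.
  lastSnapshot = { alice: accounts.alice, bob: accounts.bob };
}
```

Because clients never write the slots themselves, the snapshot can never show one account debited without the other credited.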

17.3.2. Asynchronous Callbacks

Of course, calling an application server to reach a database takes time, so the Server-Side ActionScript thread cannot wait for the response. Instead, when a result is returned from the application server, the response is queued until the ActionScript thread can deal with it. In several cases, server-side scripts can take an action but will not get a result immediately. For example:

  • Using a NetConnection to connect to another instance

  • Calling a remote method on another instance

  • Calling a remote method on a client

  • Calling a remote method on a service using Flash Remoting

  • Playing a stream

  • Updating another instance's shared object

In every case in which a server-side script initiates a request, it will not get a response back immediately. In many cases, that means the thread must put the work it was doing on hold until it receives a response. A classic example of this type of problem is when a client attempts to connect and must be authenticated. The authentication step usually requires calling an application server using Flash Remoting. No further processing of the client is possible until the remote method returns. So the server-side code places the Client object in a queue and returns null from onConnect( ), leaving the client in a pending state. When the result is returned, the server-side script must pick up where it left off. It retrieves the client from the queue and accepts or rejects the connection.
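The pending-connection pattern might be sketched as follows in JavaScript. The pendingClients table and onAuthResult( ) handler are illustrative stand-ins; in a real application, onConnect( ) would start the Flash Remoting call, and the result handler would call application.acceptConnection( ) or application.rejectConnection( ).

```javascript
// Sketch of queuing clients while authentication is pending.
var pendingClients = {};   // keyed by a hypothetical per-client userID

function onConnect(client, userID, password) {
  pendingClients[userID] = client;
  // In FlashCom: start the Flash Remoting authentication call here.
  return null;             // null leaves the client in the pending state
}

// Invoked later, when the remoting result arrives.
function onAuthResult(userID, ok) {
  var client = pendingClients[userID];
  delete pendingClients[userID];
  if (client === undefined) return "unknown";
  // In FlashCom: application.acceptConnection(client) or
  // application.rejectConnection(client) would go here.
  return ok ? "accepted" : "rejected";
}
```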

When multiple remote methods must be called, extra caution is required. For example, when an application instance must both call a remote method in order to initialize itself and also call a remote method to authenticate each client, there is no guarantee that the calls will return in the same order they were made. The initialization information may return before or after the authentication information. The strategy you develop for dealing with this kind of problem will depend on whether one call is dependent on the other. In this scenario, if the initialization information is required before an authentication call can be made, clients should be placed in a queue until initialization is complete. Then each client can be authenticated. To simplify processing, the client can be moved from the initialization queue into a pending authentication queue. If authentication can be performed before initialization, the client should be placed in a pending authentication queue and authenticated. When the authentication call returns, if the initialization call has not returned, the client can be placed into an initialization queue. The lobby.asc code in Example 16-2 showed how authentication can be performed before initialization of the instance is complete (see the onAuthenticate( ) method). Queuing clients during authentication is discussed in detail in Chapter 18.
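For the first case, where initialization must complete before authentication can begin, the queue-then-drain logic might look like this sketch (all names are illustrative):

```javascript
// Sketch: hold clients until instance initialization completes, then
// begin authenticating each one in arrival order.
var initialized = false;
var initQueue = [];
var authStarted = [];      // records the order authentication begins

function clientArrived(userID) {
  if (!initialized) {
    initQueue.push(userID);          // wait for the init call to return
  } else {
    beginAuth(userID);
  }
}

// Invoked when the instance's initialization call returns.
function onInitResult() {
  initialized = true;
  while (initQueue.length > 0) {
    beginAuth(initQueue.shift());    // drain the queue in arrival order
  }
}

function beginAuth(userID) {
  // In FlashCom this would issue the Flash Remoting authentication call
  // and move the client into a pending-authentication queue.
  authStarted.push(userID);
}
```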

17.3.3. Latency and Application State

The individual latency of clients connected to an instance can vary dramatically. A client on the same local network as the server may have very fast response times while a distant client on a remote network may not. For many applications, this doesn't really matter. However, for some games, auctions, certain online testing scenarios, and other applications, it does. Consider a simple quiz game in which a question is asked and the first person to indicate she has the answer gets the first try at answering. Each user clicks a button to indicate she wants to answer the question, and the Flash client uses the call( ) method to send a message to the server. The low-latency client has a distinct advantage over the high-latency client. For example, if the local client has a latency of 5 ms and the remote client a latency of 500 ms, the remote client must click the button half a second earlier in order to be judged first.

One way to deal with this sort of problem is to have each client timestamp her request. Provided a clock offset has been determined, as described earlier under "Clock Synchronization," a good approximation of the server time when each user clicked the button can be determined. Another solution is to have the client track the time between when the question was received and the button clicked and send that elapsed time to the server. In either case, when the first message arrives, the server-side script can set an interval equal to roughly one-and-a-half times the largest client latency. When the interval is over, the server can look at each time-corrected or delay-time message and choose the winner. The added delay can often be disguised by updating all the clients with partial information as soon as the first message arrives. For example, a sound can play and the button controls of each player can be disabled in response to the first message. When the winner is determined, the clients are sent another message so they know who will try to answer the question.
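The time-correction step can be sketched as follows. Here clockOffsets holds the per-client offsets determined during clock synchronization, so a click's corrected server time is its client timestamp plus the offset, and pickWinner( ) would run when the judging interval expires. All names and values are illustrative.

```javascript
// Sketch: pick the quiz winner from time-corrected click messages.
var clockOffsets = { near: 2, far: -180 };  // illustrative offsets, in ms
var entries = [];

function onBuzz(userID, clientTime) {
  // Record the corrected server time of the click itself,
  // not the time the message happened to arrive.
  entries.push({ user: userID, serverTime: clientTime + clockOffsets[userID] });
  // In FlashCom, the first message would also start an interval of
  // roughly 1.5 * the largest client latency before judging.
}

function pickWinner() {
  var winner = null;
  for (var i = 0; i < entries.length; i++) {
    if (winner === null || entries[i].serverTime < winner.serverTime) {
      winner = entries[i];
    }
  }
  return winner === null ? null : winner.user;
}
```

Note that the far client can win even though its message arrives later, because judging uses corrected click times.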

In other games, such as multiplayer Minesweeper, it is more important for each user to know that a square has been uncovered as soon as possible rather than who uncovered it. The process of providing feedback in two steps can be simplified to avoid an interval step. When a message first arrives that a user has chosen to uncover a square, the user's ID and corrected server time can be stored in a shared object slot representing the square. If another message arrives for the same square, its corrected server time can be compared with the server time in the shared object slot. If the message was sent earlier, the slot is updated with the more recent user ID and time. Users playing the game will likely be too busy trying to figure out what square to turn over to worry about who owns each square.
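The compare-and-replace rule for a single square might be sketched like this, with slot standing in for a shared object slot (names are illustrative):

```javascript
// Sketch: first-click-wins resolution for one Minesweeper square.
function claimSquare(slot, userID, serverTime) {
  // Replace the owner only if this click happened earlier in
  // corrected server time than the click already recorded.
  if (slot.time === undefined || serverTime < slot.time) {
    slot.user = userID;
    slot.time = serverTime;
    return true;    // this user now owns the square
  }
  return false;     // an earlier click already owns it
}
```

A later-arriving message with an earlier corrected timestamp still wins the square, which is the point of storing the time in the slot.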

The same problem (users with low latency having an advantage over users with high latency) also occurs when the server sends information to clients. For example, during an auction, someone with a low-latency connection will see or hear auction information before users with slower connections. If a live stream containing audio or video is used, very little can be done about it. However, data-only messages can be timestamped and sent out to each client with calculated delays so that each client receives a message at roughly the same time. Similarly, all messages can be timestamped and sent out immediately but displayed after a calculated delay on each client. In either case, the effect will be to slow down the auction to accommodate the user with the highest latency.
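Computing the per-client delays is straightforward: hold each message back by the difference between the slowest client's latency and the recipient's. A sketch, with illustrative names and one-way latency estimates in milliseconds:

```javascript
// Sketch: delay each outgoing message so all clients receive it at
// roughly the same moment.
function sendDelays(latencies) {
  var max = 0;
  for (var id in latencies) {
    if (latencies[id] > max) max = latencies[id];
  }
  var delays = {};
  for (var key in latencies) {
    // Hold the message back by the slowest client's head start.
    delays[key] = max - latencies[key];
  }
  return delays;
}
```

The slowest client gets a delay of zero; everyone else waits just long enough to cancel their advantage.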

The solutions presented here assume that the client .swf has not been tampered with; that is, that the user is compliant or lacks the technical skill to crack the system. They should not be considered secure mechanisms for ensuring fairness.


Latency problems can compound one another. If no latency compensation is provided in an auction, users with low-latency connections will be provided information earlier and their bids will arrive sooner. Figure 17-1 shows the interaction of two clients and a server. The length of the arrows is not meaningful, but their slope is. Time is measured on the vertical axis increasing downward, so the more horizontal an arrow appears, the less the latency.

Client 1 has lower latency than Client 2 so messages arrive from the server sooner and take less time to deliver to the server. The illustration shows that Client 1 can work with a server message longer than Client 2 and still beat Client 2.

Figure 17-1. The low-latency client provides more time for the user to think than does the high-latency client

In other words, latency effects can add up. If Client 1 has a 5 ms latency and Client 2 has a 500 ms latency, then Client 1 has almost a full second advantage.

17.3.4. Living in the Future

Another technique for dealing with latency is to use predictive time management. Whenever it is possible to predict that an event will occur, a message including a timestamp of when the event will happen can be sent out in advance of the actual event. For example, if a projectile is fired, its trajectory can be calculated in advance, but it may not be possible to determine if another entity can redirect or stop its motion. At some point when it is determined that nothing can interfere with its travel, an event message containing the time and location of impact can be broadcast in advance so that every client will display the event at very nearly the same time.
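As an illustrative sketch, the impact of a constant-velocity projectile against a plane at a known distance can be computed at fire time and broadcast ahead of the event (the names and units are assumptions, not from the FlashCom API):

```javascript
// Sketch: precompute a projectile's impact so the event message can be
// broadcast in advance. Constant velocity, vertical target plane at targetX.
function predictImpact(firedAt, position, velocity, targetX) {
  if (velocity.x <= 0) return null;           // will never reach the plane
  var dt = (targetX - position.x) / velocity.x;
  return {
    time: firedAt + dt,                       // server time of impact
    y: position.y + velocity.y * dt           // where it crosses the plane
  };
}
```

Each client receives the impact time and location up front, so all of them can render the impact at very nearly the same server time.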

When events cannot be predicted because of unpredictable user intervention, some other techniques are still available. In some cases, events can be deliberately desynchronized. In a game of Pong, the player who returns the ball can be presented with a slower version of the ball moving toward the opposing player's paddle. The opposing player is presented with a slightly faster version. When the opposing player returns the ball, the message of the return can reach the first player just as he sees the ball contact his opponent's paddle.

Using simple dead reckoning, you can sometimes use the calculated server time of each event (rather than the time the update is received) to position each entity. As a consequence, every clip position will always be an extrapolation. In simulations or games in which sudden changes are not possible, extrapolation based on a common clock is possible, and often desirable.
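A dead-reckoning sketch: given the last update's position, velocity, and corrected server time, every client extrapolates to the current time with the same formula (names are illustrative):

```javascript
// Sketch of dead reckoning: position an entity from the server time
// of its last update rather than the time the update arrived.
function extrapolate(update, now) {
  var dt = now - update.serverTime;   // elapsed since the event, not arrival
  return {
    x: update.x + update.vx * dt,
    y: update.y + update.vy * dt
  };
}
```

Because every client extrapolates from the same event time, clients with different latencies display the entity at very nearly the same place.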



Programming Flash Communication Server
ISBN: 0596005040
Year: 2003
Pages: 203