22.7 Thread-worker-pool Strategy

A thread-worker-pool strategy creates a fixed number of workers at the beginning of execution instead of creating a new thread each time a connection is made. Thread-worker-pool implementations have several advantages over thread-per-request implementations.

  • The cost of creating worker threads is incurred only at startup and does not grow with the number of requests serviced.

  • Thread-per-request implementations do not limit the number of simultaneous active requests, and the server could run out of file descriptors if requests come in rapid succession. Thread-worker-pool implementations limit the number of open file descriptors based on the number of workers.

  • Because thread-worker-pool implementations impose natural limits on the number of simultaneous active requests, they are less likely to overload the server when a large number of requests come in.

Write a thread_worker_pool server that takes the listening port number and the number of worker threads as command-line arguments. Create the specified number of worker threads before accepting any connections. Each worker thread calls u_accept and handles the connection directly.
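A minimal sketch of the main program for such a server follows. It assumes the text's UICI library header (uici.h), which provides u_open for creating the listening endpoint and u_accept for accepting connections; the worker function appears in the sketch after the next paragraph.

    /* thread_worker_pool.c: sketch of the worker-pool server main program.
       Assumes the text's UICI library (uici.h) for u_open and u_port_t;
       the worker function is defined in the sketch after the next paragraph. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "uici.h"

    int listenfd;                              /* listening descriptor shared by all workers */
    void *worker(void *arg);                   /* defined in the next sketch */

    int main(int argc, char *argv[]) {
       int error;
       int i;
       int numworkers;
       u_port_t port;
       pthread_t *tids;

       if (argc != 3) {
          fprintf(stderr, "Usage: %s port numworkers\n", argv[0]);
          return 1;
       }
       port = (u_port_t)atoi(argv[1]);
       numworkers = atoi(argv[2]);
       if ((listenfd = u_open(port)) == -1) {  /* create the listening endpoint first */
          perror("Failed to create listening endpoint");
          return 1;
       }
       if ((tids = calloc(numworkers, sizeof(pthread_t))) == NULL) {
          perror("Failed to allocate space for thread IDs");
          return 1;
       }
       for (i = 0; i < numworkers; i++)        /* create all workers before any accept */
          if (error = pthread_create(tids + i, NULL, worker, NULL)) {
             fprintf(stderr, "Failed to create worker: %s\n", strerror(error));
             return 1;
          }
       for (i = 0; i < numworkers; i++)        /* workers normally run forever */
          if (error = pthread_join(tids[i], NULL)) {
             fprintf(stderr, "Failed to join worker: %s\n", strerror(error));
             return 1;
          }
       return 0;
    }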

Although POSIX specifies that accept be thread-safe, some systems have not yet complied with this requirement. One way to handle this problem is to do your own synchronization. Use a single statically initialized mutex to protect the call to u_accept. Each thread locks the mutex before calling u_accept and unlocks it when u_accept returns. In this way, at most one thread at a time can be waiting for a connection request. As soon as a request comes in, the worker thread unlocks the mutex, and another thread can begin waiting.
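A sketch of the worker thread under this approach is shown below. It relies on the listenfd global from the previous sketch and again assumes the UICI u_accept function; BUFSIZE and handle_connection are illustrative placeholders for the client name buffer size and the application-specific processing.

    /* worker.c: sketch of one worker thread for the pool above.
       A single statically initialized mutex serializes calls to u_accept;
       BUFSIZE and handle_connection are hypothetical placeholders. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include "uici.h"
    #define BUFSIZE 1024

    extern int listenfd;                       /* listening descriptor set up in main */
    void handle_connection(int communfd);      /* application-specific processing */

    static pthread_mutex_t acceptlock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
       char client[BUFSIZE];
       int communfd;
       int error;

       for ( ; ; ) {
          if (error = pthread_mutex_lock(&acceptlock)) {      /* only one thread waits in u_accept */
             fprintf(stderr, "Failed to lock mutex: %s\n", strerror(error));
             break;
          }
          communfd = u_accept(listenfd, client, BUFSIZE);     /* block until a connection arrives */
          if (error = pthread_mutex_unlock(&acceptlock)) {    /* let another worker start waiting */
             fprintf(stderr, "Failed to unlock mutex: %s\n", strerror(error));
             break;
          }
          if (communfd == -1) {
             perror("Failed to accept connection");
             continue;
          }
          handle_connection(communfd);                        /* handle the connection directly */
       }
       return NULL;
    }

Because the mutex is released as soon as u_accept returns, the accepted connection is serviced outside the critical section, so different workers can handle different connections concurrently.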
