One of the primary reasons to use threads is to improve performance.[1] Using threads can improve resource utilization by letting applications more easily exploit available processing capacity, and can improve responsiveness by letting applications begin processing new tasks immediately while existing tasks are still running.
[1] Some might argue this is the only reason we put up with the complexity threads introduce.
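For instance, handing work off to a thread pool lets the submitting thread accept new tasks immediately while worker threads use whatever processing capacity is available. The sketch below is a minimal illustration of that idea; the TaskDispatcher class and dispatch method are hypothetical names chosen for the example, not part of the text.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskDispatcher {
    // A pool sized to the available processors lets the application exploit
    // spare CPU capacity without creating an unbounded number of threads.
    private final ExecutorService exec =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // New tasks are accepted immediately and queued for execution, so the
    // caller stays responsive while earlier tasks are still running.
    public void dispatch(Runnable task) {
        exec.execute(task);
    }

    public void shutdown() {
        exec.shutdown();
    }
}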
This chapter explores techniques for analyzing, monitoring, and improving the performance of concurrent programs. Unfortunately, many of the techniques for improving performance also increase complexity, thus increasing the likelihood of safety and liveness failures. Worse, some techniques intended to improve performance are actually counterproductive or trade one sort of performance problem for another. While better performance is often desirable, and improving performance can be very satisfying, safety always comes first. First make your program right, then make it fast, and then only if your performance requirements and measurements tell you it needs to be faster. In designing a concurrent application, squeezing out the last bit of performance is often the least of your concerns.