For example, suppose that you have a class that stores user information and service information in two separate hash tables, as shown in Listing 3. Here, the accessor methods for user and service data are synchronized, which means that they synchronize on the AttributesStore object. While this is perfectly thread-safe, it increases the likelihood of contention for no real benefit. If a thread is executing setUserInfo, not only will other threads be locked out of setUserInfo and getUserInfo, as desired, but they will also be locked out of getServiceInfo and setServiceInfo.
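Listing 3 itself is not reproduced in this excerpt; the following is a minimal sketch of the pattern it describes, using the names mentioned above (AttributesStore, setUserInfo, and so on) with illustrative field names. Every accessor is a synchronized method, so all four share the object's single monitor:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Listing 3 pattern: each accessor synchronizes on the
// AttributesStore instance itself, so user and service accesses all
// contend for the same lock.
public class AttributesStore {
    private final Map<String, String> usersMap = new HashMap<>();
    private final Map<String, String> servicesMap = new HashMap<>();

    public synchronized String getUserInfo(String user) {
        return usersMap.get(user);
    }

    public synchronized void setUserInfo(String user, String info) {
        usersMap.put(user, info);
    }

    public synchronized String getServiceInfo(String service) {
        return servicesMap.get(service);
    }

    public synchronized void setServiceInfo(String service, String info) {
        servicesMap.put(service, info);
    }
}
```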
This problem can be avoided by having the accessors synchronize on the actual objects being shared (the userMap and servicesMap objects), as shown in Listing 4. Now threads accessing the services map will not contend with threads trying to access the users map. In this case, the same effect could also be obtained by creating the maps using the synchronized wrapper mechanism provided by the Collections framework, Collections.synchronizedMap().
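A sketch of the finer-grained version, assuming a hypothetical AttributesStore class shaped like the one the text describes: each accessor synchronizes on the map it actually touches, so user and service operations no longer contend with each other.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Listing 4 approach: lock the shared maps themselves,
// not the enclosing store object.
public class AttributesStore2 {
    private final Map<String, String> usersMap = new HashMap<>();
    private final Map<String, String> servicesMap = new HashMap<>();

    public String getUserInfo(String user) {
        synchronized (usersMap) {          // users lock only
            return usersMap.get(user);
        }
    }

    public void setUserInfo(String user, String info) {
        synchronized (usersMap) {
            usersMap.put(user, info);
        }
    }

    public String getServiceInfo(String service) {
        synchronized (servicesMap) {       // services lock only
            return servicesMap.get(service);
        }
    }

    public void setServiceInfo(String service, String info) {
        synchronized (servicesMap) {
            servicesMap.put(service, info);
        }
    }
}
```

Wrapping each map with Collections.synchronizedMap(new HashMap<>()) achieves the same per-map locking without the explicit synchronized blocks.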
Assuming that requests against the two maps are evenly distributed, this technique would cut the number of potential contentions in half. One of the most common contention bottlenecks in server-side Java applications is the HashMap. Applications use HashMap to cache all sorts of critical shared data (user profiles, session information, file contents), and access to a shared HashMap must be synchronized. For example, if you are writing a Web server and all your cached pages are stored in a HashMap, every request will want to acquire and hold the lock on that map, and it will become a bottleneck.
We can extend the lock-granularity technique to handle this situation, although we must be careful, as there are some potential Java Memory Model (JMM) hazards associated with this approach.
The LockPoolMap in Listing 5 exposes thread-safe get and put methods, but spreads the synchronization over a pool of locks, reducing contention substantially. LockPoolMap is thread-safe and functions like a simplified HashMap, but has more attractive contention properties. Instead of synchronizing on the entire map for each get or put operation, synchronization is done at the bucket level: each bucket has its own lock, and that lock is acquired when traversing the bucket for either read or write.
The locks are created when the map is created (there would be JMM problems if they were created lazily). If you create a LockPoolMap with many buckets, many threads will be able to use the map concurrently with a much lower likelihood of contention. However, the reduced contention does not come for free. Without a global lock, it becomes much more difficult to perform operations that act on the map as a whole, such as the size method. An implementation of size would have to sequentially acquire the lock for each bucket, count the nodes in that bucket, then release that lock and move on to the next bucket.
But once a bucket's lock is released, other threads are free to modify that bucket again, so by the time size finishes counting, its result may well be wrong. Still, the LockPoolMap technique works quite well in some situations, such as shared caches. Table 1 compares the performance of three shared map implementations: a synchronized HashMap, an unsynchronized HashMap (not thread-safe), and a LockPoolMap.
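Listing 5 is not included in this excerpt; the following is a hedged reconstruction of the technique as described, with illustrative names. All locks are created in the constructor, get and put lock only the relevant bucket, and size visits buckets one at a time (and so can be stale by the time it returns):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a lock-pool map: one lock per bucket instead of a single
// map-wide lock. Names and details are illustrative, not the
// article's actual Listing 5.
public class LockPoolMap<K, V> {
    private final Object[] locks;
    private final Map<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    public LockPoolMap(int numBuckets) {
        locks = new Object[numBuckets];
        buckets = (Map<K, V>[]) new Map[numBuckets];
        // All locks exist before the map is published; creating them
        // lazily would be a JMM hazard.
        for (int i = 0; i < numBuckets; i++) {
            locks[i] = new Object();
            buckets[i] = new HashMap<>();
        }
    }

    private int bucketFor(Object key) {
        // Mask off the sign bit so the index is never negative.
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public V get(K key) {
        int i = bucketFor(key);
        synchronized (locks[i]) {      // lock only this bucket
            return buckets[i].get(key);
        }
    }

    public V put(K key, V value) {
        int i = bucketFor(key);
        synchronized (locks[i]) {
            return buckets[i].put(key, value);
        }
    }

    // size must visit every bucket in turn; each bucket may change
    // again as soon as its lock is released, so the total is only
    // a snapshot, not an exact count.
    public int size() {
        int n = 0;
        for (int i = 0; i < buckets.length; i++) {
            synchronized (locks[i]) {
                n += buckets[i].size();
            }
        }
        return n;
    }
}
```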
[Table 1: run times for a synchronized HashMap, an unsynchronized HashMap, and a LockPoolMap, with a varying number of threads]
The unsynchronized version is present only to show the overhead of contention. A test that performs random put and get operations on the map was run with a variable number of threads, on a dual-processor Linux system using a Sun 1.x JDK. The table shows the run time for each combination. This test is somewhat of an extreme case: the test programs do nothing but access the map, so there will be many more contentions than there would be in a realistic program, but it is designed to illustrate the performance penalty of contention.
While all the implementations exhibit similar scaling characteristics for large numbers of threads, the HashMap implementation shows a huge performance penalty when going from one thread to two, because there is contention on every single put and get operation. With more than one thread, the LockPoolMap technique is approximately 15 times faster than the HashMap technique. This difference reflects the time lost to scheduling overhead and to idle time spent waiting to acquire locks.
The advantage of LockPoolMap would be even larger on a system with more processors. Another technique that may improve performance is called "lock collapsing" (see Listing 6). Recall that the methods of the Vector class are nearly all synchronized. Imagine that you have a Vector of String values and you are searching for the longest String. Suppose further that you know elements will be added only at the end and will not be removed, making it (mostly) safe to access the data as shown in the getLongest method, which simply loops through the elements of the Vector, calling elementAt to retrieve each one.
The getLongest2 method is nearly identical, except that it acquires the lock on the Vector before starting the loop. As a result, when elementAt attempts to acquire the lock, the JVM sees that the current thread already holds it and does not contend.
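Listing 6 is not reproduced in this excerpt; here is a minimal sketch of the two methods as described, with the class name LongestFinder being an illustrative assumption. Vector's own methods synchronize on the Vector instance, so wrapping the loop in synchronized (v) collapses the per-call lock acquisitions into one:

```java
import java.util.Vector;

// Sketch of the lock-collapsing idea: getLongest acquires the
// Vector's lock on every elementAt call, while getLongest2 holds it
// once for the whole scan.
public class LongestFinder {
    public static String getLongest(Vector<String> v) {
        String longest = "";
        for (int i = 0; i < v.size(); i++) {
            // elementAt acquires and releases the Vector's lock each time
            String s = v.elementAt(i);
            if (s.length() > longest.length())
                longest = s;
        }
        return longest;
    }

    public static String getLongest2(Vector<String> v) {
        synchronized (v) {  // one acquisition for the whole loop
            String longest = "";
            for (int i = 0; i < v.size(); i++) {
                // the lock is already held; reentrant acquisition, no contention
                String s = v.elementAt(i);
                if (s.length() > longest.length())
                    longest = s;
            }
            return longest;
        }
    }
}
```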
It lengthens the synchronized block, which may seem to run counter to the "get in, get out" principle, but because it avoids so many potential synchronizations, much less time is lost to scheduling overhead, and it can be considerably faster. On a dual-processor Linux system running a Sun 1.x JDK, this benchmark showed a measurable speedup for the lock-collapsed version.