RFR (S): CR 8005926: (thread) Merge ThreadLocalRandom state into java.lang.Thread

Peter Levart peter.levart at gmail.com
Wed Jan 16 09:23:57 UTC 2013


Hi Aleksey,

The test I used is simple:

https://raw.github.com/plevart/lambda-hacks/master/jdk-test/src/org/openjdk/tests/java/util/concurrent/atomic/RandomTest.java

I ran it on an i7-2600K (4 cores x 2 threads) PC running Linux, with a JDK8 build from the lambda repo and default JVM options (which means +UseCompressedOops is enabled by default).
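
(As a side note, a minimal hand-rolled sketch of the three call patterns being compared might look like the code below. This is not the linked test: the class name, iteration count, and timing harness are made up for illustration, with all the usual caveats of naive microbenchmarks.)

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class TlrPatternsSketch {

    // Pattern 3, for comparison: a plain ThreadLocal<Random>.
    static final ThreadLocal<Random> TL =
            ThreadLocal.withInitial(Random::new);

    public static void main(String[] args) {
        final int iterations = 100_000_000;

        // Pattern 1: acquire the per-thread instance once and reuse it.
        ThreadLocalRandom tlr = ThreadLocalRandom.current();
        long t0 = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += tlr.nextInt();
        }
        report("TLR.nextInt()           ", t0, iterations, sink);

        // Pattern 2: look up the per-thread instance on every call.
        t0 = System.nanoTime();
        sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += ThreadLocalRandom.current().nextInt();
        }
        report("TLR.current().nextInt() ", t0, iterations, sink);

        // Pattern 3: go through ThreadLocal.get() on every call.
        t0 = System.nanoTime();
        sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += TL.get().nextInt();
        }
        report("TL.get().nextInt()      ", t0, iterations, sink);
    }

    static void report(String label, long t0, int iterations, int sink) {
        double nsPerOp = (System.nanoTime() - t0) / (double) iterations;
        // Printing the sink keeps the loops from being optimized away entirely.
        System.out.printf("%s %.1f ns/op (sink=%d)%n", label, nsPerOp, sink);
    }
}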

Could you share the sources of your tests so I can try to run them on my machine?

Regards, Peter

On 01/16/2013 10:13 AM, Aleksey Shipilev wrote:

Hi Peter,

This is an interesting experiment.

On 01/16/2013 12:59 PM, Peter Levart wrote:
> I did some micro benchmarks and here are the results:
>
> http://dl.dropbox.com/u/101777488/TLR/TLRbenchmarkresults.txt
>
> Results indicate that the usage pattern ThreadLocalRandom.current().nextInt()
> is as fast as the proposed variant, while the nextInt() method itself is as
> fast as JDK7's, which is some 20% faster than the proposed variant.

I find this hard to believe, since the baseline experiment I did in the
conceiving note in this thread actually tells otherwise:

JDK8 (baseline, 4 threads)
  TLR.nextInt():            6.4 +- 0.1 ns/op
  TLR.current().nextInt(): 16.1 +- 0.4 ns/op
  TL.get().nextInt():      19.1 +- 0.6 ns/op

JDK8 (patched, 4 threads)
  TLR.nextInt():            6.5 +- 0.2 ns/op
  TLR.current().nextInt():  6.4 +- 0.1 ns/op
  TL.get().nextInt():      17.2 +- 2.0 ns/op

That is, TLR.current().nextInt() is as fast as the already-acquired
TLR.nextInt() even in the baselined case, which pretty much means we hit the
lower bound for possible infrastructure overheads and we actually compute.
Note that the changes in next() did not degrade the performance either.
Please check your microbenchmarks.

> So the alternative implementation seems to be faster, and it has the
> following additional benefits:
>
> - Checks the calling thread and throws when called from an invalid thread.

This is a spec change; it potentially breaks code (instead of "just" degrading
performance). I would not like to see that pushed into the JDK.

> - Could reinstate the padding fields (or @Contended long rnd) if needed
>   (the tests were done without padding).

Hardly a benefit, since Thread can be padded as well, and padding there will
yield a better footprint.

-Aleksey.
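
(For readers skimming the padding point above: the idea is to keep the per-thread seed from sharing a cache line with unrelated hot state. A hedged sketch of the manual-padding variant, with made-up class and field names, could look like the following; it is not the actual JDK layout.)

// Illustrative only: surround the mutable seed with unused longs so that
// two threads' seeds end up on different cache lines (64-byte lines assumed).
class PaddedSeedHolder {
    // Leading padding.
    long p01, p02, p03, p04, p05, p06, p07;

    // The per-thread random state updated on every nextInt() call.
    long rnd;

    // Trailing padding.
    long p11, p12, p13, p14, p15, p16, p17;
}

In JDK 8 a similar effect can be requested declaratively with the internal @sun.misc.Contended annotation, which the VM honors outside of JDK classes only when -XX:-RestrictContended is set.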


