4646474 : BigInteger.pow() algorithm slow in 1.4.0

Alan Eliasen eliasen at mindspring.com
Fri May 17 05:48:39 UTC 2013


On 05/14/2013 07:11 PM, Brian Burkhalter wrote:

> The test in this issue report
>
> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4646474
>
> is pretty much useless for comparative purposes after applying 4837946, as the computed value is just a power of two which will be handled very quickly by the new algorithm. As a single alternate data point I modified the test instead to compute 17^13466917. The execution time on my MacBookPro5,3 before applying the 4837946 patch was 2336.975514 seconds and afterwards 79.202646 seconds: an improvement of about a factor of 30. I am sure that there are much more extreme examples than this but the result is good to see.
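For anyone who wants to reproduce that data point, something along these lines is all it takes. This is just my own quick sketch, not the exact test attached to the bug report, and the timing obviously depends on the machine:

import java.math.BigInteger;

public class PowTiming {
    public static void main(String[] args) {
        // Time a single large exponentiation, as in the modified test.
        // 17^13466917 has roughly 16.6 million decimal digits.
        long start = System.nanoTime();
        BigInteger result = BigInteger.valueOf(17).pow(13466917);
        long elapsed = System.nanoTime() - start;
        System.out.printf("bit length = %d, time = %.3f s%n",
                result.bitLength(), elapsed / 1e9);
    }
}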

Yes, the extreme examples are ones like in the original bug report, which are literally millions of times faster. If the base contains a power of 2 (which is by far the most common case I see in numeric algorithms), then performance can be so drastically improved that it's hard to measure the actual ratio. For example, you've seen the workarounds in BigDecimal for slow 10^x calculations, which will be greatly improved by this patch and the toString() changes in the next phase.
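To illustrate why a base containing a factor of 2 is such a big win: you can pull the powers of two out of the base up front and put them back with a single shift at the end, so only the odd part of the base has to go through general multiplication. The following is only a simplified sketch of that idea, not the actual patch code (which also has to worry about overflow, small bases, and so on):

import java.math.BigInteger;

public class PowerOfTwoIdea {
    // Factor the base as (odd part) * 2^k, exponentiate only the odd part,
    // and reinstate the twos with a single shift.  Sketch only: assumes a
    // nonzero base and a nonnegative exponent, and omits the overflow check
    // that k * exponent needs in real code.
    static BigInteger powWithShift(BigInteger base, int exponent) {
        int k = base.getLowestSetBit();        // trailing zero bits of the base
        BigInteger odd = base.shiftRight(k);   // odd part of the base
        return odd.pow(exponent)               // only the odd part is expensive
                  .shiftLeft(k * exponent);    // 2^(k*exponent) in one shift
    }

    public static void main(String[] args) {
        // 10 = 2 * 5, so a 10^n computation benefits directly from the shift.
        System.out.println(powWithShift(BigInteger.TEN, 100000).bitLength());
    }
}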

Note that when Schoenhage-Strassen multiplication is included, the ratio will be even better. On an AMD FX-8350 at 4 GHz (eight cores, but using just one), the time for your example of 17^13466917 drops from 1412.522 seconds to 15.233 seconds, an improvement of a factor of about 93.
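The reason a faster multiplication algorithm feeds straight through into pow() is that exponentiation is essentially repeated squaring, so nearly all of the time goes into a handful of very large multiplications. A textbook square-and-multiply sketch (not the JDK code, which layers the power-of-two and small-base handling on top of this) looks like:

import java.math.BigInteger;

public class SquareAndMultiply {
    // Left-to-right binary exponentiation: one squaring per bit of the
    // exponent, plus one multiply for each 1 bit.  The last few squarings
    // operate on enormous numbers, so the asymptotic cost of multiplication
    // (schoolbook vs. Karatsuba/Toom-Cook vs. Schoenhage-Strassen) dominates.
    static BigInteger pow(BigInteger base, int exponent) {
        BigInteger result = BigInteger.ONE;
        for (int bit = Integer.numberOfLeadingZeros(exponent); bit < 32; bit++) {
            result = result.multiply(result);          // square
            if (((exponent >>> (31 - bit)) & 1) != 0) {
                result = result.multiply(base);        // multiply on a 1 bit
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Sanity check against the library implementation.
        System.out.println(pow(BigInteger.valueOf(17), 1000)
                .equals(BigInteger.valueOf(17).pow(1000)));
    }
}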

> I linked this issue to 4837946 as a duplicate so it will be closed out at the same time.

That'll be great to see. Combined, these two bug reports are old enough to buy us drinks. I'm so proud of them. :)

Is there any plan to backport these to earlier JVMs?

--
  Alan Eliasen
  eliasen at mindspring.com
  http://futureboy.us/
