Regression (b86): OOM on parallel stream

Brian Goetz brian.goetz at oracle.com
Fri Apr 19 05:44:04 PDT 2013


Yes, in parallel, a limit operation (currently) needs to buffer its entire result. There are optimizations we can apply (not yet done) on SIZED streams and UNORDERED streams that remove this restriction, but in the general case, there's going to be a lot of buffering when run in parallel. This is because limit() is constrained to deliver the elements in encounter order.
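[Editorial aside, not part of the original message: a minimal sketch of what relaxing the encounter-order constraint looks like from the caller's side, written against the final JDK 8 API (java.util.stream.Stream) rather than the b86 Streams class. Whether it actually avoids the buffering depends on the UNORDERED-stream optimization described above being implemented.]

    import java.util.stream.Stream;

    public class UnorderedLimit {
        public static void main(String[] args) {
            // unordered() declares that encounter order does not matter,
            // so limit() is free to keep the first 200000 elements to
            // arrive rather than the first 200000 in encounter order.
            long count = Stream.iterate(1L, n -> n + 1L)
                    .parallel()
                    .unordered()
                    .filter(l -> l % 100 == 0)
                    .limit(200_000)
                    .count();
            System.out.println(count); // always prints 200000
        }
    }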

Though in this case, buffering 200000 longs should not run out of memory; your heap size is probably tiny, and the 10-minute wait was GC thrashing.
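[Editorial aside, not from the original thread: because the filter keeps every 100th element, the 200000th match is 100 * 200000 = 20,000,000, so the same count can be computed over a bounded, SIZED source that splits evenly in parallel. A sketch against the final JDK 8 API:]

    import java.util.stream.LongStream;

    public class SizedAlternative {
        public static void main(String[] args) {
            // rangeClosed is SIZED and splits evenly, so the parallel
            // pipeline needs no unbounded buffering; every 100th value
            // in [1, 20000000] passes the filter, giving exactly
            // 200000 matches, with no limit() needed.
            long count = LongStream.rangeClosed(1, 20_000_000)
                    .parallel()
                    .filter(l -> l % 100 == 0)
                    .count();
            System.out.println(count); // 200000
        }
    }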

On 4/19/2013 8:08 AM, Mallwitz, Christian wrote:

Hi,

The following throws (after a 10+ minute wait) an OOM; removing the parallel() bit produces the expected result of 200000.

Thanks
Christian

    public class OOM {
        public static void main(String[] args) {
            System.out.println(
                java.util.stream.Streams.iterate(1L, n -> n + 1L)
                    .parallel()
                    .filter(l -> l % 100 == 0)
                    .limit(200000)
                    .count());
        }
    }

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.stream.SpinedBuffer.ensureCapacity(SpinedBuffer.java:129)
    at java.util.stream.Nodes$SpinedNodeBuilder.begin(Nodes.java:1278)
    at java.util.stream.Sink$ChainedReference.begin(Sink.java:252)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:452)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:443)
    at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:328)
    at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:273)
    at java.util.stream.AbstractTask.compute(AbstractTask.java:284)
    at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:710)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1012)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1631)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


