RFR 8146991: Introduce per-worker preserved mark stacks in ParallelGC
Tony Printezis tprintezis at twitter.com
Fri Mar 4 16:14:25 UTC 2016
- Previous message (by thread): RFR 8146991: Introduce per-worker preserved mark stacks in ParallelGC
- Next message (by thread): RFR 8146991: Introduce per-worker preserved mark stacks in ParallelGC
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Thomas,
I confirmed that the code below was incorrect and doing the addition before calling restore() fixes the issue. Latest webrev here:
http://cr.openjdk.java.net/~tonyp/8146991/webrev.2/
Tony
On March 3, 2016 at 8:26:00 PM, Tony Printezis (tprintezis at twitter.com) wrote:
Thomas,
Sorry, just realized I made a mistake:
  void PreservedMarksSet::restore() {
    size_t total_size = 0;
    for (uint i = 0; i < _num; i += 1) {
      get(i)->restore();
      total_size += get(i)->size();
    }

I should of course add the size before restore() is called (as it will be 0 afterwards). I’ll fix that tomorrow. Let me know if there’s anything else I should change...
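[Editorial note: for illustration, here is a simplified stand-alone sketch of the bug and the fix. The class shapes below are hypothetical stand-ins for HotSpot's PreservedMarks/PreservedMarksSet, not the actual code; the point is only the ordering: size() must be sampled before restore(), because restore() drains the stack.]

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical stand-in: each worker owns a stack of (address, saved mark)
// pairs recorded during an evacuation failure.
struct PreservedMarks {
  std::vector<std::pair<int*, int>> _stack;
  size_t size() const { return _stack.size(); }
  void restore() {
    while (!_stack.empty()) {
      *_stack.back().first = _stack.back().second;  // put the mark back
      _stack.pop_back();                            // stack ends up empty
    }
  }
};

struct PreservedMarksSet {
  std::vector<PreservedMarks> _stacks;
  explicit PreservedMarksSet(unsigned num) : _stacks(num) {}
  PreservedMarks* get(unsigned i) { return &_stacks[i]; }

  // Returns the total number of restored marks. The size is accumulated
  // *before* restore() is called; doing it afterwards would always add 0.
  size_t restore() {
    size_t total_size = 0;
    for (unsigned i = 0; i < _stacks.size(); i += 1) {
      total_size += get(i)->size();  // must happen first: restore() drains it
      get(i)->restore();
    }
    return total_size;
  }
};
```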
Tony
On March 3, 2016 at 7:15:13 PM, Tony Printezis (tprintezis at twitter.com) wrote:
Thomas (and all),
Updated webrev with the changes below:
http://cr.openjdk.java.net/~tonyp/8146991/webrev.1/
Tony
On March 3, 2016 at 7:02:03 PM, Tony Printezis (tprintezis at twitter.com) wrote:
Thomas,
Thanks for looking at it. See inline...
On March 3, 2016 at 9:35:26 AM, Thomas Schatzl (thomas.schatzl at oracle.com) wrote:
Hi Tony,
On Tue, 2016-03-01 at 13:22 -0500, Tony Printezis wrote:
Hi all,
These are the per-worker preserved mark stack changes for ParallelGC:
http://cr.openjdk.java.net/~tonyp/8146991/webrev.0/
They are similar to the ParNew changes and rely on some classes I introduced then.
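[Editorial note: a minimal sketch of the idea being reviewed, with illustrative names rather than HotSpot's actual classes. Instead of all GC workers pushing preserved marks onto one shared, lock-protected stack, each worker gets its own stack indexed by worker id, so the push path needs no synchronization.]

```cpp
#include <cstddef>
#include <thread>
#include <utility>
#include <vector>

// Illustrative stand-in for a per-worker preserved-mark stack.
struct WorkerPreservedStack {
  std::vector<std::pair<void*, int>> marks;  // (object, original mark header)
  void preserve(void* obj, int mark) { marks.emplace_back(obj, mark); }
};

struct PerWorkerPreserved {
  std::vector<WorkerPreservedStack> stacks;
  explicit PerWorkerPreserved(unsigned workers) : stacks(workers) {}

  // Each worker only ever touches stacks[worker_id], so concurrent
  // preserve() calls from different workers are race-free without a lock.
  WorkerPreservedStack& for_worker(unsigned id) { return stacks[id]; }

  size_t total() const {
    size_t n = 0;
    for (const auto& s : stacks) n += s.marks.size();
    return n;
  }
};
```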
- In PSPromotionManager::initialize(), please extract the "ParallelGCThreads + 1" calculation into a local variable.
Done.
- nitpick: the comment in PSPromotionManager::post_scavenge(), since it is a proper sentence, start with upper case and end with full stop.
Done.
- in PreservedMarksSet::restore(), I would prefer if the total_size were calculated as the marks are restored. I do not see a particular reason why you would need to calculate this beforehand, and it is an extra loop over a potentially large number of threads.
I can accumulate the size incrementally, so the logging output should then be done at the end of the method?
E.g., PreservedMarks::restore() could return the number of preserved marks, and the method would just add them together.
- I personally dislike the use of "counter-variable += 1" in the increment section of for-loops, and prefer using a post-increment. Feel free to ignore this comment.
I’ve always done it this way, ++i / i++ as a statement just looks weird...
Quick question on this: I moved:
  log_trace(gc, ergo)("Restoring " SIZE_FORMAT " marks", total_size());

to the shared code so that the message is logged irrespective of GC (for example, I don’t think we were logging this for ParNew or DefNew). Is “ergo” the right category here though?
- I would remove the "ergo" tag, and try to make the message more meaningful, as the "gc" tag is the catch-all bucket. E.g. something like "Restoring... marks after evacuation failure.".
Done. I’ll post a new webrev shortly…
Tony
Otherwise it looks good to me. I will run it through jprt and some test list.
Thanks, Thomas
Tony Printezis | JVM/GC Engineer / VM Team | Twitter
@TonyPrintezis tprintezis at twitter.com