Improve VecCache under parallel frontend by Mark-Simulacrum · Pull Request #124780 · rust-lang/rust
Conversation
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
try-job: i686-gnu-nopt
try-job: dist-x86_64-linux
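For context, here is a minimal sketch of what a bucketed index computation of this kind can look like. The constants (a 4096-entry first bucket, doubling buckets after that, 21 buckets covering the full u32 key space) and all names are illustrative assumptions, not the exact rustc_query_system code; the point is only that each key maps to one fixed bucket, so a bucket, once allocated, never has to move and readers need no lock.

```rust
// Illustrative sketch only: the constants and names below are assumptions,
// not the exact rustc_query_system implementation.
struct SlotIndex {
    bucket_idx: usize,      // which bucket holds this key
    entries: usize,         // capacity of that bucket
    index_in_bucket: usize, // offset of the key inside the bucket
}

// Assume bucket 0 covers indices 0..2^12 and bucket b (b > 0) covers
// 2^(b+11)..2^(b+12); 21 buckets then cover the whole u32 key space.
const FIRST_BUCKET_SHIFT: u32 = 12;

fn from_index(idx: u32) -> SlotIndex {
    // ilog2 of the index, clamped to 11 so everything below 4096 maps to bucket 0.
    let log = (idx | ((1u32 << FIRST_BUCKET_SHIFT) - 1)).ilog2();
    let bucket_idx = (log + 1 - FIRST_BUCKET_SHIFT) as usize;
    let start = if bucket_idx == 0 { 0 } else { 1usize << log };
    SlotIndex {
        bucket_idx,
        entries: (1usize << log).max(1usize << FIRST_BUCKET_SHIFT),
        index_in_bucket: idx as usize - start,
    }
}

fn main() {
    assert_eq!(from_index(0).bucket_idx, 0);
    assert_eq!(from_index(4095).index_in_bucket, 4095);
    assert_eq!(from_index(4096).bucket_idx, 1);
    assert_eq!(from_index(100_000).entries, 65536);
    assert_eq!(from_index(u32::MAX).bucket_idx, 20);
}
```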
rustbot added the following labels:
- A-query-system (Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html))
- S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties.)
- T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue.)
bors added a commit to rust-lang-ci/rust that referenced this pull request
[WIP] Improve VecCache under parallel frontend
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
FIXME: Extract the internals to rustc_data_structures, safety comments, etc.
r? @Mark-Simulacrum -- opening for perf first.
☀️ Try build successful - checks-actions
Build commit: 15ee677 (15ee67784efd5d46670c56f52fd618a4a00771e5)
impl SlotIndex {
    #[inline]
    fn from_index(idx: u32) -> Self {
Have you tried to write a branchless version of this?
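For illustration, a branchless variant of the bucket computation could look roughly like the sketch below. It builds on the assumed bucket layout sketched earlier on this page (4096-entry first bucket, doubling afterwards) and is not the actual patch; the only change is computing the bucket's start offset without an `if`.

```rust
// Branchless variant of the illustrative from_index sketched earlier; same
// assumed bucket layout, not the actual patch.
// Returns (bucket_idx, entries, index_in_bucket).
fn from_index_branchless(idx: u32) -> (usize, usize, usize) {
    let log = (idx | 0xFFF).ilog2();      // clamped to 11 for idx < 4096
    let bucket_idx = (log - 11) as usize; // 0, 1, 2, ...
    // Bucket 0 starts at 0; bucket b > 0 starts at 1 << log. Multiplying by
    // the boolean replaces the `if` (the compiler may emit a cmov either way).
    let start = (bucket_idx != 0) as usize * (1usize << log);
    let entries = (1usize << log).max(4096);
    (bucket_idx, entries, idx as usize - start)
}

fn main() {
    assert_eq!(from_index_branchless(0), (0, 4096, 0));
    assert_eq!(from_index_branchless(4096), (1, 4096, 0));
    assert_eq!(from_index_branchless(100_000), (5, 65536, 100_000 - 65536));
}
```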
let index = match current {
    0 => return None,
    // Treat "initializing" as actually just not initialized at all.
    // The only reason this is a separate state is that `complete` calls could race and
Because we ensure that each query only executes once, complete will only be called once per key, so this race can't happen.
I would prefer to avoid a non-local safety condition like that. The performance gain from avoiding a true lock here isn't meaningful anyway.
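For illustration, the kind of per-slot state machine being discussed might look like the sketch below (hypothetical names and layout, not the PR's code): the `INITIALIZING` state lets racing `complete` calls be resolved locally with a `compare_exchange`, rather than relying on the non-local guarantee that each query key completes only once.

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicU8, Ordering};

const EMPTY: u8 = 0;
const INITIALIZING: u8 = 1; // a writer has claimed the slot but not published yet
const COMPLETE: u8 = 2;

struct Slot<V> {
    state: AtomicU8,
    value: UnsafeCell<MaybeUninit<V>>,
}

// SAFETY: all access to `value` is coordinated through `state`.
unsafe impl<V: Send + Sync> Sync for Slot<V> {}

impl<V> Slot<V> {
    fn new() -> Self {
        Slot { state: AtomicU8::new(EMPTY), value: UnsafeCell::new(MaybeUninit::uninit()) }
    }

    fn complete(&self, value: V) {
        // Claim the slot; a caller that loses the race simply drops its value
        // instead of overwriting the one being published.
        if self
            .state
            .compare_exchange(EMPTY, INITIALIZING, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
        {
            unsafe { (*self.value.get()).write(value) };
            self.state.store(COMPLETE, Ordering::Release);
        }
    }

    fn get(&self) -> Option<&V> {
        // EMPTY and INITIALIZING look the same to readers: not published yet.
        // (Dropping the stored value on cache teardown is omitted here.)
        if self.state.load(Ordering::Acquire) == COMPLETE {
            Some(unsafe { (*self.value.get()).assume_init_ref() })
        } else {
            None
        }
    }
}

fn main() {
    let slot: Slot<u64> = Slot::new();
    assert!(slot.get().is_none());
    slot.complete(42);
    slot.complete(7); // loses the compare_exchange; 7 is dropped
    assert_eq!(slot.get(), Some(&42));
}
```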
Finished benchmarking commit (15ee677): comparison URL.
Overall result: ❌✅ regressions and improvements - ACTION NEEDED
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.8% | [0.2%, 2.9%] | 169 |
| Regressions ❌ (secondary) | 0.9% | [0.2%, 2.3%] | 70 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.8% | [-2.3%, -0.4%] | 5 |
| All ❌✅ (primary) | 0.8% | [0.2%, 2.9%] | 169 |
Max RSS (memory usage)
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.8% | [1.8%, 1.8%] | 1 |
| Regressions ❌ (secondary) | 3.0% | [2.6%, 3.8%] | 4 |
| Improvements ✅ (primary) | -3.0% | [-7.1%, -1.1%] | 54 |
| Improvements ✅ (secondary) | -8.2% | [-13.3%, -4.6%] | 22 |
| All ❌✅ (primary) | -2.9% | [-7.1%, 1.8%] | 55 |
Cycles
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.2% | [0.8%, 2.1%] | 13 |
| Regressions ❌ (secondary) | 2.4% | [1.4%, 3.2%] | 6 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 1.2% | [0.8%, 2.1%] | 13 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 676.55s -> 678.643s (0.31%)
Artifact size: 315.72 MiB -> 316.01 MiB (0.09%)
Mark-Simulacrum changed the title from [WIP] Improve VecCache under parallel frontend to Improve VecCache under parallel frontend
@bors try @rust-timer queue
No major changes, but let's collect another, more recent run. Presuming results are about the same, I'll mark this ready for review; the improvement in scalability IMO outweighs the slight regressions to single-threaded compilation.
bors added a commit to rust-lang-ci/rust that referenced this pull request
Improve VecCache under parallel frontend
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
☀️ Try build successful - checks-actions
Build commit: d447427 (d447427ec5ed8db4f1baad0135b5304baef82ce9)
Finished benchmarking commit (d447427): comparison URL.
Overall result: ❌✅ regressions and improvements - ACTION NEEDED
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.2%, 2.6%] | 171 |
| Regressions ❌ (secondary) | 1.1% | [0.2%, 10.0%] | 99 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.0% | [-5.2%, -0.4%] | 8 |
| All ❌✅ (primary) | 0.7% | [0.2%, 2.6%] | 171 |
Max RSS (memory usage)
Results (primary -3.4%, secondary -8.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 3.2% | [2.5%, 3.9%] | 2 |
| Improvements ✅ (primary) | -3.4% | [-8.2%, -0.9%] | 70 |
| Improvements ✅ (secondary) | -9.2% | [-14.0%, -3.3%] | 21 |
| All ❌✅ (primary) | -3.4% | [-8.2%, -0.9%] | 70 |
Cycles
Results (primary 2.2%, secondary 2.3%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 2.2% | [1.5%, 3.1%] | 17 |
| Regressions ❌ (secondary) | 2.3% | [1.4%, 2.9%] | 22 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 2.2% | [1.5%, 3.1%] | 17 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: missing data
Artifact size: 319.77 MiB -> 319.42 MiB (-0.11%)
r? compiler
As noted in the PR description this is a regression for cycles/time on non-parallel builds (more code to execute). That's largely offset by far better scalability to more threads in parallel compilers. I think the trade-off is worth it, particularly since this also appears to be a memory usage win.
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
This would instead require us to adjust the tests to not reach the isize::MAX limit on 32-bit targets 🤔 I think an abort here is totally acceptable, as we already abort if the layout is small enough but we fail to allocate.
Yeah, that seems obvious now that you say it :)
Pushed up a revert of most of the 32-bit changes; we were already aborting/panicking on allocation failure, so I think most of those changes weren't even necessary in the first place. We just needed to avoid the tests hitting the LayoutError, which we now do by testing the full (sparse) range on 64-bit Linux and otherwise limiting to a smaller range to avoid wasting a bunch of memory.
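For reference, the distinction discussed above can be illustrated with a small sketch (an assumed helper, not the PR's code): an over-large bucket fails while computing its `Layout` (a `LayoutError`, which is what the 32-bit tests were hitting), whereas a bucket whose layout is valid but cannot actually be allocated already aborts through `handle_alloc_error`.

```rust
use std::alloc::{self, Layout};

// Illustrative helper, not the PR's code: allocate a zeroed bucket of `len`
// slots, aborting on allocation failure like the existing cache does.
fn alloc_zeroed_bucket<T>(len: usize) -> *mut T {
    // On 32-bit targets a huge `len` overflows isize::MAX here and produces a
    // LayoutError (what the tests were tripping), not an allocation failure.
    let layout = Layout::array::<T>(len).expect("bucket layout exceeds isize::MAX");
    assert!(layout.size() > 0, "zero-sized buckets are never allocated");
    // SAFETY: the layout has non-zero size (checked above).
    let ptr = unsafe { alloc::alloc_zeroed(layout) };
    if ptr.is_null() {
        // Aborts the process: allocation failure was already fatal before.
        alloc::handle_alloc_error(layout);
    }
    ptr.cast()
}

fn main() {
    let bucket = alloc_zeroed_bucket::<u64>(4096);
    assert!(!bucket.is_null());
    // Freeing is omitted; real code would keep the layout and call dealloc.
}
```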
@bors try @rust-timer queue
@bors try
Hopefully this works, I think bors forgot about the PR...
bors added a commit to rust-lang-ci/rust that referenced this pull request
Improve VecCache under parallel frontend
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
try-job: i686-gnu-nopt
try-job: dist-x86_64-linux
☀️ Try build successful - checks-actions
Build commit: a185d27 (a185d27b07d8de5f7a07525cef0701b8364b4862)
Finished benchmarking commit (a185d27): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
| Regressions ❌ (secondary) | 1.1% | [0.2%, 3.0%] | 165 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.4% | [-3.1%, -0.1%] | 9 |
| All ❌✅ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
Max RSS (memory usage)
Results (primary -2.9%, secondary -2.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 2.3% | [1.6%, 2.8%] | 4 |
| Regressions ❌ (secondary) | 3.1% | [0.7%, 7.5%] | 24 |
| Improvements ✅ (primary) | -3.2% | [-6.9%, -0.8%] | 85 |
| Improvements ✅ (secondary) | -6.4% | [-12.5%, -1.7%] | 29 |
| All ❌✅ (primary) | -2.9% | [-6.9%, 2.8%] | 89 |
Cycles
Results (primary 2.1%, secondary 2.7%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 2.1% | [1.0%, 4.5%] | 31 |
| Regressions ❌ (secondary) | 2.7% | [1.2%, 5.2%] | 23 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 2.1% | [1.0%, 4.5%] | 31 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 787.86s -> 792.684s (0.61%)
Artifact size: 335.54 MiB -> 335.95 MiB (0.12%)
r=me if you're happy with perf
@bors r=lcnr
Yeah, within reasonable bounds, though obviously still room for improvement.
📌 Commit da58efb has been approved by lcnr
It is now in the queue for this repository.
bors added the S-waiting-on-bors label (Status: Waiting on bors to run and complete tests. Bors will change the label on completion.) and removed the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties.)
Finished benchmarking commit (5926e82): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Our benchmarks found a performance regression caused by this PR.
This might be an actual regression, but it can also be just noise.
Next Steps:
- If the regression was expected or you think it can be justified, please write a comment with sufficient written justification, and add `@rustbot label: +perf-regression-triaged` to it, to mark the regression as triaged.
- If you think that you know of a way to resolve the regression, try to create a new PR with a fix for the regression.
- If you do not understand the regression or you think that it is just noise, you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group were already notified of this PR).
@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
| Regressions ❌ (secondary) | 1.0% | [0.2%, 2.9%] | 172 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.9% | [-3.0%, -0.4%] | 8 |
| All ❌✅ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
Max RSS (memory usage)
Results (primary -3.0%, secondary -2.7%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 2.8% | [2.0%, 4.1%] | 5 |
| Regressions ❌ (secondary) | 3.6% | [1.8%, 8.6%] | 16 |
| Improvements ✅ (primary) | -3.3% | [-7.5%, -1.2%] | 82 |
| Improvements ✅ (secondary) | -6.3% | [-12.4%, -1.5%] | 28 |
| All ❌✅ (primary) | -3.0% | [-7.5%, 4.1%] | 87 |
Cycles
Results (primary 2.0%, secondary 2.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 2.1% | [1.0%, 4.1%] | 29 |
| Regressions ❌ (secondary) | 2.6% | [1.8%, 3.8%] | 27 |
| Improvements ✅ (primary) | -2.0% | [-2.0%, -2.0%] | 1 |
| Improvements ✅ (secondary) | -2.2% | [-3.1%, -1.4%] | 3 |
| All ❌✅ (primary) | 2.0% | [-2.0%, 4.1%] | 30 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 792.184s -> 793.779s (0.20%)
Artifact size: 334.48 MiB -> 335.25 MiB (0.23%)
Visiting for rustc-perf triage
As a quick process note: I'm not sure it was a great idea for @Mark-Simulacrum to add the rustc-perf-triaged label back on June 15th (i.e. before this had landed and in fact before it had gotten r+'ed), because in theory that could cause us to miss a case where new commits are added and the regression effectively becomes untriaged.
But having said that, ...
... as far as I can tell, the regression here remains roughly within the bounds that were anticipated back when this PR's description was first composed... back circa June 9th we were looking at a worst case of 2.60% and a mean of 0.7%, while now it is a worst case of 3.3% and a mean of 1.0%...
so that all seems ballpark within what was anticipated, and the expected gains for parallel compilation performance probably justify paying this cost as it stands.
So, I'll leave the rustc-perf-triaged label in place.
Did you try to avoid the large first bucket (for example, make them all point to the same allocation), to simplify indexing?
Also, `put` for the present array can avoid the `compare_exchange`, which should help performance.
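A sketch of what that suggestion might look like, under the assumption (hypothetical names and layout, not the PR's code) that each index is written at most once: the value array is paired with a separate `present` flag array, and publishing is a plain release store instead of a `compare_exchange`. The trade-off is that correctness then depends on the external once-per-key guarantee discussed earlier in the thread.

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicU8, Ordering};

// Hypothetical bucket layout illustrating the suggestion: one value array plus
// a separate `present` flag array.
struct Bucket<V> {
    values: Box<[UnsafeCell<MaybeUninit<V>>]>,
    present: Box<[AtomicU8]>,
}

// SAFETY: concurrent access to `values` is coordinated through `present`.
unsafe impl<V: Send + Sync> Sync for Bucket<V> {}

impl<V> Bucket<V> {
    fn new(len: usize) -> Self {
        Bucket {
            values: (0..len).map(|_| UnsafeCell::new(MaybeUninit::uninit())).collect(),
            present: (0..len).map(|_| AtomicU8::new(0)).collect(),
        }
    }

    fn put(&self, i: usize, value: V) {
        // Assumed invariant: each index is written by at most one thread, so
        // no compare_exchange is needed to claim the slot.
        unsafe { (*self.values[i].get()).write(value) };
        self.present[i].store(1, Ordering::Release); // publish
    }

    fn get(&self, i: usize) -> Option<&V> {
        if self.present[i].load(Ordering::Acquire) == 1 {
            Some(unsafe { (*self.values[i].get()).assume_init_ref() })
        } else {
            None
        }
    }
}

fn main() {
    let bucket: Bucket<u32> = Bucket::new(8);
    bucket.put(3, 99);
    assert_eq!(bucket.get(3), Some(&99));
    assert!(bucket.get(4).is_none());
}
```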