Properly stall coroutine witnesses in new solver by compiler-errors · Pull Request #138845 · rust-lang/rust



Conversation


compiler-errors

Fixes rust-lang/trait-system-refactor-initiative#82.

Using an infer var for the coroutine witness meant that if we constrain that infer var during writeback and then try to normalize during writeback, after the coroutine witness has been plugged into the coroutine type (which we do with the new solver), we may encounter a query cycle from trying to fetch the coroutine witness types.

This PR changes the `AnalysisInBody` typing mode to track all coroutines being defined by the current body during typeck, and treats any auto trait and `Copy` obligations that would require fetching the hidden types of these coroutines as ambiguous. It also introduces a new proof tree visitor which detects which obligations should be stalled because they bottom out in one of these ambiguous obligations, so we can re-check them after borrowck (as is done with the old solver).
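For illustration, here is a minimal, self-contained sketch of the stalling idea (hypothetical types and names, not rustc's actual API): an obligation is only stalled and re-checked after borrowck if its ambiguity bottoms out in an auto trait or `Copy` goal whose self type is one of the coroutines currently being defined.

```rust
use std::collections::HashSet;

// Hypothetical stand-ins for rustc's internal types.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct CoroutineId(u32);

enum Goal {
    // An auto trait or `Copy` obligation whose self type is a coroutine.
    AutoTraitForCoroutine(CoroutineId),
    // Any other goal, together with the nested goals its candidate produced.
    Other(Vec<Goal>),
}

/// Returns true if this goal should be stalled, i.e. its ambiguity is caused
/// by a coroutine defined in the current body, so it gets re-checked after
/// borrowck once the witness types are available.
fn is_stalled_on_local_coroutine(goal: &Goal, defining: &HashSet<CoroutineId>) -> bool {
    match goal {
        Goal::AutoTraitForCoroutine(id) => defining.contains(id),
        Goal::Other(nested) => nested
            .iter()
            .any(|g| is_stalled_on_local_coroutine(g, defining)),
    }
}

fn main() {
    let defining: HashSet<_> = [CoroutineId(0)].into_iter().collect();
    let goal = Goal::Other(vec![Goal::AutoTraitForCoroutine(CoroutineId(0))]);
    assert!(is_stalled_on_local_coroutine(&goal, &defining));
}
```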

This PR shouldn't have any functional changes, but after the fact it seems to have introduced a perf regression. Looking at the code, I don't see much of a reason why this would be the case: we don't call the new query when the old solver is active, nor should we be hitting any of this new unstalling code with the old solver.

r? lcnr

@rustbot

Some changes occurred to the core trait solver

cc @rust-lang/initiative-trait-system-refactor

changes to inspect_obligations.rs

cc @compiler-errors, @lcnr

compiler-errors

/// entered before passing `value` to the function. This is currently needed for
/// `normalize_erasing_regions`, which skips binders as it walks through a type.
///
/// TODO: doc


I need to explain that this doesn't return all ambiguous preds, just the ones that are stalled on coroutines.


jhpratt added a commit to jhpratt/rust that referenced this pull request

Mar 24, 2025

@jhpratt

Tweaks to writeback and Obligation -> Goal conversion

Each of these commits is self-contained, but they are prerequisites that I'd like to land before rust-lang#138845, which still needs some cleaning.

The ""most controversial"" one is probably Explicitly don't fold coroutine obligations in writeback, which I prefer because I think using fold_predicate to control against not normalizing predicates seems... easy to mess up 🤔, and we could have other things that we don't want to normalize.

Explicitly noting whether we want resolve to normalize is a lot clearer (and currently in writeback is limited to resolving stalled coroutine obligations), since we can attach it to a comment that explains why.
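A rough sketch of that design point, using made-up names rather than the actual writeback code: passing an explicit flag to `resolve` makes the "don't normalize stalled coroutine obligations" decision visible at the call site, instead of hiding it inside a folder override.

```rust
// Hypothetical illustration; `Resolver` and `should_normalize` are made up.
struct Resolver {
    // Whether `resolve` should also normalize what it resolves. In the
    // writeback case described above this would be `false` only when
    // resolving stalled coroutine obligations.
    should_normalize: bool,
}

impl Resolver {
    fn resolve(&self, value: &str) -> String {
        if self.should_normalize {
            format!("normalize({value})")
        } else {
            value.to_owned()
        }
    }
}

fn main() {
    let stalled = Resolver { should_normalize: false };
    let normal = Resolver { should_normalize: true };
    assert_eq!(stalled.resolve("pred"), "pred");
    assert_eq!(normal.resolve("pred"), "normalize(pred)");
}
```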

rust-timer added a commit to rust-lang-ci/rust that referenced this pull request

Mar 24, 2025

@rust-timer

Rollup merge of rust-lang#138846 - compiler-errors:stall-prereqs, r=lcnr

Tweaks to writeback and Obligation -> Goal conversion


@bors

lcnr

fn visit_goal(&mut self, inspect_goal: &super::inspect::InspectGoal<'_, 'tcx>) -> Self::Result {
    inspect_goal.goal().predicate.visit_with(self)?;
    if let Some(candidate) = inspect_goal.unique_applicable_candidate() {


this type visitor feels somewhat fragile, and I expect `unique_applicable_candidate` and the limited recursion depth to cause us to fail to stall obligations in very rare cases. otoh I don't think this is a problem.

so my understanding here is:

Please add this as a comment somewhere, prolly to the `stalled_coroutine_obligations` field of the typeck results


Yep, that's my understanding. We could perhaps stall obligations if we find coroutines in the predicate or if we hit the recursion limit, but idk if we have a facility to detect when we hit the recursion limit here. Shouldn't be too hard to fix, but I'd rather leave that to when we need it.
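A tiny sketch (hypothetical names, not the actual visitor) of the more conservative strategy floated here: if the proof-tree walk can't give a definite answer, e.g. because there is no unique applicable candidate or the recursion limit was hit, fall back to a purely syntactic check for coroutines in the predicate instead of silently failing to stall.

```rust
// Hypothetical outcome of walking a goal's proof tree.
enum WalkResult {
    StalledOnCoroutine,
    NotStalled,
    // The walk bailed out: no unique applicable candidate, or the depth
    // limit was reached before bottoming out.
    Inconclusive,
}

fn should_stall(walk: WalkResult, predicate_mentions_coroutine: bool) -> bool {
    match walk {
        WalkResult::StalledOnCoroutine => true,
        WalkResult::NotStalled => false,
        // Conservative fallback: over-approximate and re-check after borrowck.
        WalkResult::Inconclusive => predicate_mentions_coroutine,
    }
}

fn main() {
    assert!(should_stall(WalkResult::StalledOnCoroutine, false));
    assert!(!should_stall(WalkResult::NotStalled, true));
    assert!(should_stall(WalkResult::Inconclusive, true));
    assert!(!should_stall(WalkResult::Inconclusive, false));
}
```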


comment somewhere

@compiler-errors

Let's see how bad the perf is from making items larger.

@bors try @rust-timer queue


bors added a commit to rust-lang-ci/rust that referenced this pull request

Mar 25, 2025

@bors

Properly stall coroutine witnesses in new solver

TODO: write description

r? lcnr

@bors


@bors

☀️ Try build successful - checks-actions
Build commit: 5443aaa (5443aaa4127ecdfcad1a50e7d7f2e4650bb52877)


@rust-timer

Finished benchmarking commit (5443aaa): comparison URL.

Overall result: ❌ regressions - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                             | mean | range        | count |
|-----------------------------|------|--------------|-------|
| Regressions ❌ (primary)    | 0.3% | [0.1%, 0.5%] | 71    |
| Regressions ❌ (secondary)  | 0.3% | [0.1%, 0.5%] | 38    |
| Improvements ✅ (primary)   | -    | -            | 0     |
| Improvements ✅ (secondary) | -    | -            | 0     |
| All ❌✅ (primary)          | 0.3% | [0.1%, 0.5%] | 71    |

Max RSS (memory usage)

Results (primary 1.2%, secondary -1.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|-----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 1.5%  | [0.5%, 3.9%]   | 18    |
| Regressions ❌ (secondary)  | 2.5%  | [1.0%, 3.9%]   | 3     |
| Improvements ✅ (primary)   | -1.6% | [-2.5%, -0.7%] | 2     |
| Improvements ✅ (secondary) | -3.7% | [-6.8%, -0.9%] | 7     |
| All ❌✅ (primary)          | 1.2%  | [-2.5%, 3.9%]  | 20    |

Cycles

Results (secondary -1.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|-----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | 2.0%  | [2.0%, 2.0%]   | 1     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -2.7% | [-4.5%, -1.0%] | 2     |
| All ❌✅ (primary)          | -     | -              | 0     |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 777.999s -> 780.062s (0.27%)
Artifact size: 365.81 MiB -> 365.88 MiB (0.02%)

@compiler-errors

Let me try putting coroutines into the same list as the opaques 🤔

@lcnr

alternatively, intern `TypingEnv` itself. We should only very rarely access its value, and it's already 2 pointers wide
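For reference, a minimal sketch of the interning idea (toy types, not rustc's arena-based interners): instead of making the by-value struct larger, store each distinct value once and hand out shared handles, so copies stay one pointer wide no matter how many fields the struct grows.

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Toy stand-in for whatever fields the real struct carries.
#[derive(PartialEq, Eq, Hash)]
struct TypingEnvData {
    typing_mode: u8,
    param_env: u64,
}

#[derive(Default)]
struct Interner {
    set: HashSet<Rc<TypingEnvData>>,
}

impl Interner {
    /// Returns a shared handle, reusing an existing allocation for equal values.
    fn intern(&mut self, data: TypingEnvData) -> Rc<TypingEnvData> {
        let rc = Rc::new(data);
        if let Some(existing) = self.set.get(&rc) {
            return Rc::clone(existing);
        }
        self.set.insert(Rc::clone(&rc));
        rc
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern(TypingEnvData { typing_mode: 0, param_env: 42 });
    let b = interner.intern(TypingEnvData { typing_mode: 0, param_env: 42 });
    // Equal values share one allocation; passing the handle around is cheap.
    assert!(Rc::ptr_eq(&a, &b));
}
```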


Contributor

@lcnr lcnr left a comment


nits, r=me

@compiler-errors

@lcnr

@bors

📌 Commit f943f73 has been approved by lcnr

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors

Status: Waiting on bors to run and complete tests. Bors will change the label on completion.

and removed S-waiting-on-author

Status: This is awaiting some action (such as code changes or more information) from the author.

labels

Apr 23, 2025

bors added a commit to rust-lang-ci/rust that referenced this pull request

Apr 23, 2025

@bors

bors added a commit to rust-lang-ci/rust that referenced this pull request

Apr 23, 2025

@bors

@bors

@bors

@github-actions GitHub Actions

What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing df35ff6 (parent) -> fa58ce3 (this PR)

Test differences


134 doctest diffs were found. These are ignored, as they are noisy.

Test dashboard

Run

cargo run --manifest-path src/ci/citool/Cargo.toml --
test-dashboard fa58ce343ad498196d799a7381869e79938e952a --output-dir test-dashboard

And then open test-dashboard/index.html in your browser to see an overview of all executed tests.

Job duration changes

  1. x86_64-apple-2: 4955.8s -> 3888.9s (-21.5%)
  2. dist-x86_64-apple: 10084.1s -> 8480.5s (-15.9%)
  3. dist-aarch64-apple: 5197.0s -> 5840.1s (12.4%)
  4. dist-x86_64-linux: 5057.5s -> 5606.1s (10.8%)
  5. dist-arm-linux: 5355.5s -> 5839.8s (9.0%)
  6. dist-x86_64-mingw: 8157.6s -> 7444.0s (-8.7%)
  7. x86_64-apple-1: 9455.9s -> 8658.0s (-8.4%)
  8. dist-ohos: 9786.6s -> 10609.3s (8.4%)
  9. dist-i686-mingw: 8660.5s -> 8055.6s (-7.0%)
  10. aarch64-apple: 3851.1s -> 4084.9s (6.1%)

How to interpret the job duration changes?

Job durations can vary a lot, based on the actual runner instance that executed the job, system noise, invalidated caches, etc. The table above is provided mostly for t-infra members, for simpler debugging of potential CI slow-downs.

@bors bors mentioned this pull request

Apr 24, 2025

@rust-timer

Finished benchmarking commit (fa58ce3): comparison URL.

Overall result: ❌ regressions - please read the text below

Our benchmarks found a performance regression caused by this PR.
This might be an actual regression, but it can also be just noise.

Next Steps:

@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                             | mean  | range          | count |
|-----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 0.2%  | [0.1%, 0.4%]   | 25    |
| Regressions ❌ (secondary)  | 0.4%  | [0.1%, 0.6%]   | 48    |
| Improvements ✅ (primary)   | -0.2% | [-0.2%, -0.2%] | 1     |
| Improvements ✅ (secondary) | -     | -              | 0     |
| All ❌✅ (primary)          | 0.2%  | [-0.2%, 0.4%]  | 26    |

Max RSS (memory usage)

Results (primary 0.3%, secondary -1.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|-----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 1.9%  | [0.5%, 2.9%]   | 3     |
| Regressions ❌ (secondary)  | 2.0%  | [1.6%, 2.3%]   | 5     |
| Improvements ✅ (primary)   | -2.1% | [-3.8%, -0.4%] | 2     |
| Improvements ✅ (secondary) | -4.1% | [-8.0%, -1.4%] | 6     |
| All ❌✅ (primary)          | 0.3%  | [-3.8%, 2.9%]  | 5     |

Cycles

Results (primary -0.6%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|-----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -0.6% | [-0.6%, -0.5%] | 4     |
| Improvements ✅ (secondary) | -     | -              | 0     |
| All ❌✅ (primary)          | -0.6% | [-0.6%, -0.5%] | 4     |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 775.888s -> 775.233s (-0.08%)
Artifact size: 365.06 MiB -> 365.14 MiB (0.02%)

@nnethercote

TODO: write description

lol, lmao

This was referenced

Apr 24, 2025

@lcnr

slightly bigger perf impact than expected from the previous perf run 🤔 unsure what caused it, and it feels minor enough for me to not look too deeply into this.

@compiler-errors

Yeah, weirdly this is functionally equivalent to the changes I perf'd in #138845 (comment).

The fact that rebasing + perf testing it again (#138845 (comment)) led to a regression, and then a worse regression after rebasing again, suggests that there's some performance instability here rather than something that can be optimized.

@rylev

As has been pointed out above, the regressions are small enough that this isn't a huge concern, and the regressions themselves might be due to some underlying perf instability.

@rustbot label: +perf-regression-triaged

@lcnr lcnr mentioned this pull request

May 1, 2025

4 tasks