Implement `MappedMutexGuard`, `MappedRwLockReadGuard`, and `MappedRwLockWriteGuard`. by zachs18 · Pull Request #117107 · rust-lang/rust
ACP: rust-lang/libs-team#260
Tracking issue: #117108
(Outdated) `MutexState`/`RwLockState` structs
Having `sys::(Mutex|RwLock)` and `poison::Flag` as separate fields in the `Mutex`/`RwLock` would require `MappedMutexGuard`/`MappedRwLockWriteGuard` to hold an additional pointer, so I combined the two fields into a `MutexState`/`RwLockState` struct. This should not noticeably affect perf or layout, but requires an additional field projection when accessing the former `.inner` or `.poison` fields (now `.state.inner` and `.state.poison`). If this is not desired, then `MappedMutexGuard`/`MappedRwLockWriteGuard` can instead hold separate pointers to the two fields.
The doc-comments are mostly copied from the existing `*Guard` doc-comments, with some parts from `lock_api::Mapped*Guard`'s doc-comments.
Unresolved question: Are more tests needed?
r? @m-ou-se
(rustbot has picked a reviewer for you, use r? to override)
rustbot added labels: S-waiting-on-review (awaiting review from the assignee but also interested parties) and one marking this as relevant to the library team, which will review and decide on the PR/issue.
zachs18 marked this pull request as ready for review
Comment on lines 621 to 622:

```rust
let mut orig = ManuallyDrop::new(orig);
let value = NonNull::from(f(&mut *orig));
```
Doesn't this leak `orig` if `f` panics, leaving the lock permanently locked? I would also add a test for `mutex_lock.map(|_| panic!())` or something like that. The same applies to the other implementations.
Yeah, that makes sense (edit: that the panic behavior should be documented and tested). I've refactored it so that the "old" guard is only wrapped in `ManuallyDrop` after the closure completes, so panicking in the closure has the same effect as panicking while holding the guard normally (unlocks the guard; poisons for `MutexGuard` and `RwLockWriteGuard`, doesn't poison for `RwLockReadGuard`). I've also added tests for panicking in the closures.
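The behavior described above can be checked with a small stand-alone sketch. Since the mapped-guard API is unstable, this example panics while holding a plain `MutexGuard`, which exercises the same drop-on-unwind path that a panic inside `MutexGuard::map`'s closure now takes (this is an illustrative stand-in, not the PR's actual test):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(0i32);
    // Stand-in for `MutexGuard::map(guard, |_| panic!())`: unwinding while
    // the guard is live must drop the guard, unlocking and poisoning the mutex.
    let result = catch_unwind(AssertUnwindSafe(|| {
        let _guard = m.lock().unwrap();
        panic!("panic while holding the guard");
    }));
    assert!(result.is_err());
    // The guard was dropped during unwinding, so the lock is released
    // and the mutex is marked poisoned.
    assert!(m.is_poisoned());
}
```

If the guard were instead wrapped in `ManuallyDrop` before the closure ran, an unwind would skip the guard's destructor and the lock would stay locked forever, which is exactly the leak the review comment points out.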
```rust
// was created, and have been upheld throughout `map` and/or `try_map`.
// The signature of the closure guarantees that it will not "leak" the lifetime of the reference
// passed to it. If the closure panics, the guard will be dropped.
let data = NonNull::from(f(unsafe { &mut *orig.lock.data.get() }));
```
(applies to all the `(try_)map` functions) I used the unsafe here instead of just `&(mut) *orig` to be clearer that the lifetime of the reference is longer than the lifetime of the `orig` variable itself. However, I think the other version should still be sound, so I can change it back to that if it seems better.
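A minimal model of the pattern under discussion (hypothetical `Guard`/`MappedGuard` types with no real lock or poison handling, not std's actual ones) shows why the ordering matters: the closure runs before the original guard is defused with `ManuallyDrop`, so a panic in the closure drops the guard normally:

```rust
use std::marker::PhantomData;
use std::mem::ManuallyDrop;
use std::ptr::NonNull;

struct Guard<'a, T: ?Sized> {
    data: &'a mut T,
}

struct MappedGuard<'a, U: ?Sized> {
    data: NonNull<U>,
    _marker: PhantomData<&'a mut U>,
}

impl<'a, T: ?Sized> Guard<'a, T> {
    fn map<U: ?Sized, F>(orig: Self, f: F) -> MappedGuard<'a, U>
    where
        F: FnOnce(&mut T) -> &mut U,
    {
        // If `f` panics here, `orig` has not been defused yet, so it is
        // dropped during unwinding (a real guard would unlock here).
        let data = NonNull::from(f(&mut *orig.data));
        // Only after the closure succeeds do we suppress the destructor.
        let _defused = ManuallyDrop::new(orig);
        MappedGuard { data, _marker: PhantomData }
    }
}

impl<'a, U: ?Sized> std::ops::Deref for MappedGuard<'a, U> {
    type Target = U;
    fn deref(&self) -> &U {
        // SAFETY: `data` came from a reference valid for 'a, and the
        // original guard's destructor has been suppressed.
        unsafe { self.data.as_ref() }
    }
}

struct Pair {
    name: String,
    count: u32,
}

fn main() {
    let mut pair = Pair { name: "hello".into(), count: 3 };
    let guard = Guard { data: &mut pair };
    let name_guard = Guard::map(guard, |p| &mut p.name);
    assert_eq!(name_guard.as_str(), "hello");
    let _ = pair.count;
}
```

The closure's `FnOnce(&mut T) -> &mut U` signature (higher-ranked over the borrow's lifetime) is what prevents the returned reference from outliving the borrow passed in, matching the "will not leak the lifetime" comment above.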
> Having `sys::(Mutex|RwLock)` and `poison::Flag` as separate fields in the `Mutex`/`RwLock` would require `MappedMutexGuard`/`MappedRwLockWriteGuard` to hold an additional pointer, so I combined the two fields into a `MutexState`/`RwLockState` struct. This should not noticeably affect perf or layout, but requires an additional field projection when accessing the former `.inner` or `.poison` fields (now `.state.inner` and `.state.poison`). If this is not desired, then `MappedMutexGuard`/`MappedRwLockWriteGuard` can instead hold separate pointers to the two fields.
It does noticeably affect layout: making the states a separate struct prevents "tucking in" the value field along with the poison flag, so e.g. a `Mutex<bool>` will now take up 12 bytes instead of 6 (on Linux). I'd recommend just storing a `&Mutex<T>` instead of a `&MutexState` in the guards and accessing the fields through that.
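The padding effect can be sketched with stand-in types (illustrative only; exact sizes are not guaranteed under `repr(Rust)`, but the inequality holds in practice):

```rust
use std::mem::size_of;

// Stand-ins for std's internals, not the real types: grouping the raw
// lock and poison flag into a `State` struct forces `State`'s size up to
// a multiple of its alignment, so the data field can no longer be
// "tucked into" that trailing padding.
struct State {
    lock: u32,    // stand-in for sys::Mutex (a futex word on Linux)
    poison: bool, // stand-in for poison::Flag
}

struct Grouped {
    state: State, // padded to 8 bytes on its own
    data: bool,   // stand-in for UnsafeCell<bool>; lands after the padding
}

struct Flat {
    lock: u32,
    poison: bool,
    data: bool, // can share the same 4-byte tail as `poison`
}

fn main() {
    println!(
        "Grouped: {} bytes, Flat: {} bytes",
        size_of::<Grouped>(),
        size_of::<Flat>()
    );
    // Exact numbers depend on the compiler, but Grouped is strictly larger.
    assert!(size_of::<Grouped>() > size_of::<Flat>());
}
```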
Yes, that's true, it can affect layout because it can add overall padding; I missed that. However, `MappedMutexGuard<U>` cannot hold a `&Mutex<T>` without either adding a generic parameter (somewhat defeating the usefulness of mapping guards), or using unsafe layout guarantees to transmute to `&Mutex<()>` or something that does not name `T`.
I'll go ahead and revert the `MutexState` addition and change `MappedMutexGuard`/`MappedRwLockWriteGuard` to hold two separate references to the `sys::(Mutex|RwLock)` and the `poison::Flag` for now.
@rustbot author
rustbot added the S-waiting-on-author label (awaiting some action, such as code changes or more information, from the author) and removed the S-waiting-on-review label.
(makes implementing `Mapped*Guard` easier)
@rustbot ready

Changed to holding separate references to the raw lock and the poison flag in `MappedMutexGuard` and `MappedRwLockWriteGuard`.
Possible future optimization

```rust
#[repr(C)]
struct Mutex<T: ?Sized> {
    inner: sys::Mutex,
    poison: poison::Flag,
    data: UnsafeCell<T>,
}
```

With this hypothetical definition of `Mutex` (with the `#[repr(C)]` as a non-guaranteed implementation detail), reborrowing a `&Mutex<T>` as a `&Mutex<()>` (within `std`) would be sound, and would allow holding a `&'a Mutex<()>` in `MappedMutexGuard` instead of a `&'a sys::Mutex` and a `&'a poison::Flag`.

(Similar for `RwLock`)
rustbot added the S-waiting-on-review label and removed the S-waiting-on-author label.
Co-authored-by: Amanieu d'Antras <amanieu@gmail.com>
📌 Commit 8aaa04b has been approved by Amanieu
It is now in the queue for this repository.
🌲 The tree is currently closed for pull requests below priority 50. This pull request will be tested once the tree is reopened.
bors added the S-waiting-on-bors label (waiting on bors to run and complete tests; bors will change the label on completion) and removed the S-waiting-on-review label.
Finished benchmarking commit (a2f3c0c): comparison URL.
Overall result: ❌ regressions - no action needed
@rustbot label: -perf-regression
Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.4% | [0.4%, 0.4%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | - | - | 0 |
Max RSS (memory usage)
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -4.1% | [-4.1%, -4.1%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -4.1% | [-4.1%, -4.1%] | 1 |
Cycles
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.2% | [-1.6%, -0.9%] | 4 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 651.212s -> 650.238s (-0.15%)
Artifact size: 311.01 MiB -> 311.09 MiB (0.03%)
Miri's run of the test suite showed new UB on apple for today's nightly, just after this PR landed. Could this bug have been introduced by this PR?
```
test sync::rwlock::tests::frob ... error: Undefined Behavior: trying to retag from <42368726> for SharedReadOnly permission at alloc9742021[0x0], but that tag does not exist in the borrow stack for this location
   --> /home/runner/work/miri-test-libstd/miri-test-libstd/rust-src-patched/library/core/src/ptr/non_null.rs:401:18
    |
401 |     unsafe { &*self.as_ptr().cast_const() }
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |              |
    |              trying to retag from <42368726> for SharedReadOnly permission at alloc9742021[0x0], but that tag does not exist in the borrow stack for this location
    |              this error occurs as part of retag at alloc9742021[0x0..0x30]
    |
    = help: this indicates a potential bug in the program: it performed an invalid operation, but the Stacked Borrows rules it violated are still experimental
    = help: see https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md for further information
help: <42368726> was created by a SharedReadOnly retag at offsets [0x0..0x8]
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:348:32
    |
348 |     let mut next = ptr::from_ref(&node)
    |                                  ^^^^^^^^^^^^^^^^^^^^
help: <42368726> was later invalidated at offsets [0x8..0x10] by a write access
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:347:17
    |
347 |     node.prev = AtomicLink::new(None);
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = note: BACKTRACE (of the first span):
    = note: inside `core::ptr::NonNull::<sys::locks::rwlock::queue::Node>::as_ref::<'_>` at /home/runner/work/miri-test-libstd/miri-test-libstd/rust-src-patched/library/core/src/ptr/non_null.rs:401:18: 401:46
note: inside `sys::locks::rwlock::queue::add_backlinks_and_find_tail`
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:265:17
    |
265 |     next.as_ref().prev.set(Some(current));
    |     ^^^^^^^^^^^^^
note: inside `sys::locks::rwlock::queue::RwLock::unlock_queue`
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:487:33
    |
487 |     let tail = unsafe { add_backlinks_and_find_tail(to_node(state)) };
    |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `sys::locks::rwlock::queue::RwLock::lock_contended`
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:383:25
    |
383 |     self.unlock_queue(next);
    |     ^^^^^^^^^^^^^^^^^^^^^^^
note: inside `sys::locks::rwlock::queue::RwLock::write`
   --> std_miri_test/../library/std/src/sys/locks/rwlock/queue.rs:312:13
    |
312 |     self.lock_contended(true)
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `sync::rwlock::RwLock::<()>::write`
   --> std_miri_test/../library/std/src/sync/rwlock.rs:361:13
    |
361 |     self.inner.write();
    |     ^^^^^^^^^^^^^^^^^^
note: inside closure
   --> std_miri_test/../library/std/src/sync/rwlock/tests.rs:37:26
    |
37  |     drop(r.write().unwrap());
    |          ^^^^^^^^^
```
Also see Zulip.

Hm, the test in question does not use the `Mapped*` guards, so maybe it's a coincidence (the tests are somewhat probabilistic when it comes to concurrency).