Tracking Issue for int_lowest_highest_one
Feature gate: #![feature(int_lowest_highest_one)]
This is a tracking issue for the lowest_one and highest_one methods on primitive integer types and their NonZero counterparts.
These methods provide an efficient way to find the index of the least significant (lowest) and most significant (highest) set bit in an integer. They are common, low-level operations often supported directly by hardware (e.g. BSF/BSR on x86) and frequently used in systems programming, such as in OS kernels for managing bitmasks.
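To illustrate the intended semantics (this is not part of the proposed API), the same indices can already be computed on stable Rust from trailing_zeros and leading_zeros. The sketch below assumes the zero-based bit indexing described above; the helper names lowest_one_u32 and highest_one_u32 are hypothetical.

```rust
/// Zero-based index of the least significant set bit, or None for 0.
/// Mirrors what `lowest_one` is expected to return.
fn lowest_one_u32(x: u32) -> Option<u32> {
    if x == 0 { None } else { Some(x.trailing_zeros()) }
}

/// Zero-based index of the most significant set bit, or None for 0.
/// Mirrors what `highest_one` is expected to return.
fn highest_one_u32(x: u32) -> Option<u32> {
    if x == 0 { None } else { Some(u32::BITS - 1 - x.leading_zeros()) }
}

fn main() {
    // 0b0101_0000 has bits 4 and 6 set.
    assert_eq!(lowest_one_u32(0b0101_0000), Some(4));
    assert_eq!(highest_one_u32(0b0101_0000), Some(6));
    assert_eq!(lowest_one_u32(0), None);
}
```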
Public API
impl {u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize} {
    const fn lowest_one(self) -> Option<u32>;
    const fn highest_one(self) -> Option<u32>;
}
// For all T in the impl block above.
impl NonZero<T> {
const fn lowest_one(self) -> u32;
const fn highest_one(self) -> u32;
}
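For context, a short usage sketch on nightly, assuming the API lands exactly as written above (zero-based indices, None for zero, and an infallible u32 return on NonZero):

```rust
#![feature(int_lowest_highest_one)]

use std::num::NonZero;

fn main() {
    let x: u8 = 0b0101_0000; // bits 4 and 6 set
    assert_eq!(x.lowest_one(), Some(4));
    assert_eq!(x.highest_one(), Some(6));
    assert_eq!(0u8.lowest_one(), None);

    // NonZero values always have at least one set bit, so no Option is involved.
    let n = NonZero::new(x).unwrap();
    assert_eq!(n.lowest_one(), 4);
    assert_eq!(n.highest_one(), 6);
}
```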
Steps / History
- Implementation:
  - ACP accepted with amendments: first_set_bit and last_set_bit for integer types (libs-team#631)
  - Implementation PR: Implement feature int_lowest_highest_one for integer and NonZero types (#145381)
- Final comment period (FCP)
- Stabilization PR
Unresolved Questions
None.
Resolved Questions
- Method names
  - Although the general agreement was to use lowest_one and highest_one, we wanted to maintain consistency with the ACP "add least_significant_one and most_significant_one to integer types and NonZero types" (libs-team#467) and its tracking issue for isolate_most_least_significant_one (#136909).
  - The renaming was done in PR #144971 (Rename isolate_most_least_significant_one functions), which seems to confirm lowest_one and highest_one for this feature.