#[repr(C, packed)]
pub struct BitPtr<M, O = Lsb0, T = usize>
where
    M: Mutability,
    O: BitOrder,
    T: BitStore,
{ /* private fields */ }
Pointer to an individual bit in a memory element. Analogous to *bool.
Original: *bool and NonNull<bool>
API Differences
This must be a structure, rather than a raw pointer, for two reasons:
- It is larger than a raw pointer.
- Raw pointers are not #[fundamental] and cannot have foreign implementations.
Additionally, rather than create two structures to map to *const bool and *mut bool, respectively, this takes mutability as a type parameter.
Because the encoded span pointer requires that memory addresses are well aligned, this type also imposes the alignment requirement and refuses construction for misaligned element addresses. While this type is used in the API equivalent of ordinary raw pointers, it is restricted in value to only be references to memory elements.
ABI Differences
This has alignment 1, rather than an alignment to the processor word. This is necessary for some crate-internal optimizations.
Type Parameters
- M: Marks whether the pointer permits mutation of memory through it.
- O: The ordering of bits within a memory element.
- T: A memory type used to select both the register size and the access behavior when performing loads/stores.
Usage
This structure is used as the bitvec equivalent to *bool. It is used in all raw-pointer APIs, and provides behavior to emulate raw pointers. It cannot be directly dereferenced, as it is not a pointer; it can only be transformed back into higher referential types, or used in bitvec::ptr free functions.
These pointers can never be null or misaligned.
Implementations
The dangling pointer. This selects the starting bit of the T dangling address.
pub fn try_new<A>(addr: A, head: u8) -> Result<Self, BitPtrError<T>>
where
    A: TryInto<Address<M, T>>,
    BitPtrError<T>: From<A::Error>,
Tries to construct a BitPtr from a memory location and a bit index.
Type Parameters
- A: This accepts anything that may be used as a memory address.
Parameters
- addr: The memory address to use in the BitPtr. If this value violates the Address rules, then its conversion error will be returned.
- head: The index of the bit in *addr that this pointer selects. If this value violates the BitIdx rules, then its conversion error will be returned.
Returns
A new BitPtr, selecting the memory location addr and the bit head.
If either addr or head is an invalid value, then this propagates its error.
Constructs a BitPtr from a memory location and a bit index.
Since this requires that the address and bit index are already well-formed, it can assemble the BitPtr without inspecting their values.
Parameters
- addr: A well-formed memory address of T.
- head: A well-formed bit index within T.
Returns
A BitPtr selecting the head bit in the location addr.
Decomposes the pointer into its element address and bit index.
Parameters
- self
Returns
- .0: The memory address in which the referent bit is located.
- .1: The index of the referent bit within *.0.
pub unsafe fn range(self, count: usize) -> BitPtrRange<M, O, T>
(BitPtrRange<M, O, T> implements Iterator with Item = BitPtr<M, O, T>.)
Produces a pointer range starting at self and running for count bits.
This calls self.add(count), then bundles the resulting pointer as the high end of the produced range.
Parameters
- self: The starting pointer of the produced range.
- count: The number of bits that the produced range includes.
Returns
A half-open range of pointers, beginning at (and including) self, running for count bits, and ending at (and excluding) self.add(count).
Safety
count cannot violate the constraints in add.
Adds write permissions to a bit-pointer.
Safety
This pointer must have been derived from a *mut pointer.
👎 Deprecated: BitPtr is never null
Tests if a bit-pointer is the null value.
This is always false, as BitPtr is a NonNull internally. Use Option<BitPtr> to express the potential for a null pointer.
Casts to a bit-pointer of another storage type, preserving the bit-ordering and mutability permissions.
Behavior
This is not a free typecast! It encodes the pointer as a crate-internal span descriptor, casts the span descriptor to the U storage element parameter, then decodes the result. This preserves general correctness, but will likely change both the virtual and physical bits addressed by this pointer.
Produces a proxy reference to the referent bit.
Because BitPtr is a non-null, well-aligned pointer, this never returns None.
API Differences
This produces a proxy type rather than a true reference. The proxy implements Deref<Target = bool>, and can be converted to &bool with &*.
Safety
Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer, and you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
Examples
use bitvec::prelude::*;
let data = 1u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let val = unsafe { ptr.as_ref() }.unwrap();
assert!(*val);
Calculates the offset from a pointer.
count is in units of bits.
Safety
If any of the following conditions are violated, the result is Undefined Behavior:
- Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
- The computed offset, in bytes, cannot overflow an isize.
- The offset being in bounds cannot rely on “wrapping around” the address space. That is, the infinite-precision sum, in bytes, must fit in a usize.
These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.
Use wrapping_offset if you expect to risk hitting the high edge of the address space.
Examples
use bitvec::prelude::*;
let data = 5u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
assert!(unsafe { ptr.read() });
assert!(!unsafe { ptr.offset(1).read() });
assert!(unsafe { ptr.offset(2).read() });
Calculates the offset from a pointer using wrapping arithmetic.
count is in units of bits.
Safety
The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference.
In particular, the resulting pointer remains attached to the same allocated object that self points to. It may not be used to access a different allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
In other words, x.wrapping_offset((y as usize).wrapping_sub(x as usize)) is not the same as y, and dereferencing it is undefined behavior unless x and y point into the same allocated object.
Compared to offset, this method basically delays the requirement of staying within the same allocated object: offset is immediate Undefined Behavior when crossing object boundaries; wrapping_offset produces a pointer but still leads to Undefined Behavior if that pointer is dereferenced. offset can be optimized better and is thus preferable in performance-sensitive code.
If you need to cross object boundaries, destructure this pointer into its base address and bit index, cast the base address to an integer, and do the arithmetic in the purely integer space.
Examples
use bitvec::prelude::*;
let data = 0u8;
let mut ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let end = ptr.wrapping_offset(8);
while ptr < end {
println!("{}", unsafe { ptr.read() });
ptr = ptr.wrapping_offset(3);
}
Calculates the distance between two pointers. The returned value is in units of bits.
This function is the inverse of offset.
Safety
If any of the following conditions are violated, the result is Undefined Behavior:
- Both the starting and other pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
- Both pointers must be derived from a pointer to the same object.
- The distance between the pointers, in bytes, cannot overflow an isize.
- The distance being in bounds cannot rely on “wrapping around” the address space.
These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.
Examples
Basic usage:
use bitvec::prelude::*;
let data = 0u16;
let base = BitPtr::<_, Lsb0, _>::from_ref(&data);
let low = unsafe { base.add(5) };
let high = unsafe { low.add(6) };
unsafe {
assert_eq!(high.offset_from(low), 6);
assert_eq!(low.offset_from(high), -6);
assert_eq!(low.offset(6), high);
assert_eq!(high.offset(-6), low);
}
Incorrect usage:
use bitvec::prelude::*;
let a = 0u8;
let b = !0u8;
let a_ptr = BitPtr::<_, Lsb0, _>::from_ref(&a);
let b_ptr = BitPtr::<_, Lsb0, _>::from_ref(&b);
let diff = (b_ptr.pointer() as isize)
.wrapping_sub(a_ptr.pointer() as isize)
// Remember: raw pointers are byte-addressed,
// but these are bit-addressed.
.wrapping_mul(8);
// Create a pointer to `b`, derived from `a`.
let b_ptr_2 = a_ptr.wrapping_offset(diff);
// The pointers are *arithmetically* equal now
assert_eq!(b_ptr, b_ptr_2);
// Undefined Behavior!
unsafe {
b_ptr_2.offset_from(b_ptr);
}
Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset(count as isize)).
Safety
See wrapping_offset.
Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset((count as isize).wrapping_neg())).
Safety
See wrapping_offset.
Performs a volatile read of the bit from self.
Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.
Safety
See ptr::read_volatile for safety concerns and examples.
Copies count bits from self to dest. The source and destination may not overlap.
NOTE: this has the same argument order as ptr::copy_nonoverlapping.
Original: pointer::copy_to_nonoverlapping
Safety
See ptr::copy_nonoverlapping for safety concerns and examples.
Computes the offset (in bits) that needs to be applied to the pointer in order to make it aligned to align.
“Alignment” here means that the pointer is selecting the start bit of a memory location whose address satisfies the requested alignment.
align is measured in bytes. If you wish to align your bit-pointer to a specific fraction (½, ¼, or ⅛ of one byte), please file an issue and this functionality will be added to BitIdx.
If the base-element address of the pointer is already aligned to align, then this will return the bit-offset required to select the first bit of the successor element.
If it is not possible to align the pointer, the implementation returns usize::MAX. It is permissible for the implementation to always return usize::MAX. Only your algorithm’s performance can depend on getting a usable offset here, not its correctness.
The offset is expressed in number of bits, and not T elements or bytes. The value returned can be used with the wrapping_add method.
Safety
There are no guarantees whatsoëver that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.
Panics
The function panics if align is not a power-of-two.
Examples
use bitvec::prelude::*;
let data = [0u8; 3];
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data[0]);
let ptr = unsafe { ptr.add(2) };
let count = ptr.align_offset(2);
assert!(count > 0);
Constructs a BitPtr from an element reference.
Parameters
- elem: A borrowed memory element.
Returns
A read-only bit-pointer to the zeroth bit in the *elem location.
Attempts to construct a BitPtr from an element location.
Parameters
- elem: A read-only element address.
Returns
A read-only bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.
Constructs a BitPtr from a slice reference.
This differs from from_ref in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_ref(&slice[0]) narrows its provenance to only the slice[0] element, and calling add to leave that element, even while remaining in the slice, may cause UB.
Parameters
- slice: An immutably borrowed slice of memory.
Returns
A read-only bit-pointer to the zeroth bit in the base location of the slice.
This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.
Constructs a BitPtr from an element reference.
Parameters
- elem: A mutably borrowed memory element.
Returns
A write-capable bit-pointer to the zeroth bit in the *elem location.
Note that even if elem is an address within a contiguous array or slice, the returned bit-pointer only has provenance for the elem location, and no other.
Safety
The exclusive borrow of elem is released after this function returns. However, you must not use any other pointer than that returned by this function to view or modify *elem, unless the T type supports aliased mutation.
Attempts to construct a BitPtr from an element location.
Parameters
- elem: A write-capable element address.
Returns
A write-capable bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.
Constructs a BitPtr from a slice reference.
This differs from from_mut in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_mut(&mut slice[0]) narrows its provenance to only the slice[0] element, and calling add to leave that element, even while remaining in the slice, may cause UB.
Parameters
- slice: A mutably borrowed slice of memory.
Returns
A write-capable bit-pointer to the zeroth bit in the base location of the slice.
This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.
Gets the pointer to the base memory location containing the referent bit.
Produces a proxy mutable reference to the referent bit.
Because BitPtr is a non-null, well-aligned pointer, this never returns None.
API Differences
This produces a proxy type rather than a true reference. The proxy implements DerefMut<Target = bool>, and can be converted to &mut bool with &mut *. Writes to the proxy are not reflected in the proxied location until the proxy is destroyed, either through Drop or with its set method.
The proxy must be bound as mut in order to write through the binding.
Safety
Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer, and you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let ptr = BitPtr::<_, Lsb0, _>::from_mut(&mut data);
let mut val = unsafe { ptr.as_mut() }.unwrap();
assert!(!*val);
*val = true;
assert!(*val);
Copies count bits from src to self. The source and destination may not overlap.
NOTE: this has the opposite argument order of ptr::copy_nonoverlapping.
Original: pointer::copy_from_nonoverlapping
Safety
See ptr::copy_nonoverlapping for safety concerns and examples.
Overwrites a memory location with the given bit.
Safety
See ptr::write for safety concerns and examples.
Performs a volatile write of a memory location with the given bit.
Because processors do not have single-bit write instructions, this must perform a volatile read of the location, perform the bit modification within the processor register, and then perform a volatile write back to memory. These three steps are guaranteed to be sequential, but are not guaranteed to be atomic.
Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.
Safety
See ptr::write_volatile for safety concerns and examples.
Replaces the bit at *self with src, returning the old bit.
Safety
See ptr::replace for safety concerns and examples.
Trait Implementations
This method returns an ordering between self and other values if one exists. Read more
This method tests less than (for self and other) and is used by the < operator. Read more
This method tests less than or equal to (for self and other) and is used by the <= operator. Read more
This method tests greater than (for self and other) and is used by the > operator. Read more
impl<M, O, T> RangeBounds<BitPtr<M, O, T>> for BitPtrRange<M, O, T>
where
    M: Mutability,
    O: BitOrder,
    T: BitStore,
Start index bound. Read more
fn contains<U>(&self, item: &U) -> bool
where
    T: PartialOrd<U>,
    U: PartialOrd<T> + ?Sized,
Returns true if item is contained in the range. Read more
Auto Trait Implementations
impl<M, O, T> RefUnwindSafe for BitPtr<M, O, T> where
M: RefUnwindSafe,
O: RefUnwindSafe,
T: RefUnwindSafe,
<T as BitStore>::Mem: RefUnwindSafe,
impl<M, O, T> UnwindSafe for BitPtr<M, O, T> where
M: UnwindSafe,
O: UnwindSafe,
T: RefUnwindSafe,
<T as BitStore>::Mem: UnwindSafe,
Blanket Implementations
Mutably borrows from an owned value. Read more
Causes self to use its Binary implementation when Debug-formatted.
Causes self to use its Display implementation when Debug-formatted. Read more
Causes self to use its LowerExp implementation when Debug-formatted. Read more
Causes self to use its LowerHex implementation when Debug-formatted. Read more
Causes self to use its Octal implementation when Debug-formatted.
Causes self to use its Pointer implementation when Debug-formatted. Read more
Causes self to use its UpperExp implementation when Debug-formatted. Read more
Causes self to use its UpperHex implementation when Debug-formatted. Read more
Pipes by value. This is generally the method you want to use. Read more
Borrows self and passes that borrow into the pipe function. Read more
Mutably borrows self and passes that borrow into the pipe function. Read more
Borrows self, then passes self.borrow() into the pipe function. Read more
Mutably borrows self, then passes self.borrow_mut() into the pipe function. Read more
Borrows self, then passes self.as_ref() into the pipe function.
Mutably borrows self, then passes self.as_mut() into the pipe function. Read more
Borrows self, then passes self.deref() into the pipe function.
fn pipe_as_ref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
where
    Self: AsRef<T>,
    T: 'a,
    R: 'a,
Pipes a trait borrow into a function that cannot normally be called in suffix position. Read more
fn pipe_borrow<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
where
    Self: Borrow<T>,
    T: 'a,
    R: 'a,
Pipes a trait borrow into a function that cannot normally be called in suffix position. Read more
fn pipe_deref<'a, R>(&'a self, func: impl FnOnce(&'a Self::Target) -> R) -> R
where
    Self: Deref,
    R: 'a,
Pipes a dereference into a function that cannot normally be called in suffix position. Read more
Pipes a reference into a function that cannot ordinarily be called in suffix position. Read more
Immutable access to the Borrow<B> of a value. Read more
Mutable access to the BorrowMut<B> of a value. Read more
Immutable access to the AsRef<R> view of a value. Read more
Mutable access to the AsMut<R> view of a value. Read more
Immutable access to the Deref::Target of a value. Read more
Mutable access to the Deref::Target of a value. Read more
Calls .tap() only in debug builds, and is erased in release builds.
Calls .tap_mut() only in debug builds, and is erased in release builds. Read more
Calls .tap_borrow() only in debug builds, and is erased in release builds. Read more
Calls .tap_borrow_mut() only in debug builds, and is erased in release builds. Read more
Calls .tap_ref() only in debug builds, and is erased in release builds. Read more
Calls .tap_ref_mut() only in debug builds, and is erased in release builds. Read more
Calls .tap_deref() only in debug builds, and is erased in release builds. Read more
Provides immutable access to the reference for inspection.
Calls tap_ref in debug builds, and does nothing in release builds.
Provides mutable access to the reference for modification.
Calls tap_ref_mut in debug builds, and does nothing in release builds.
Provides immutable access to the borrow for inspection. Read more
Calls tap_borrow in debug builds, and does nothing in release builds.
fn tap_borrow_mut<F, R>(self, func: F) -> Self
where
    Self: BorrowMut<T>,
    F: FnOnce(&mut T) -> R,
Provides mutable access to the borrow for modification.
Immutably dereferences self for inspection.
fn tap_deref_dbg<F, R>(self, func: F) -> Self
where
    Self: Deref,
    F: FnOnce(&Self::Target) -> R,
Calls tap_deref in debug builds, and does nothing in release builds.
fn tap_deref_mut<F, R>(self, func: F) -> Self
where
    Self: DerefMut,
    F: FnOnce(&mut Self::Target) -> R,
Mutably dereferences self for modification.