Primitive Type pointer

1.0.0

Raw, unsafe pointers, *const T, and *mut T.

See also the std::ptr module.

Working with raw pointers in Rust is uncommon, typically limited to a few patterns. Raw pointers can be unaligned or null. However, when a raw pointer is dereferenced (using the * operator), it must be non-null and aligned.

Storing through a raw pointer using *ptr = data calls drop on the old value, so write must be used if the type has drop glue and memory is not already initialized - otherwise drop would be called on the uninitialized memory.
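
For example, a minimal sketch (using MaybeUninit to obtain uninitialized storage) of why write is needed:

use std::mem::MaybeUninit;
use std::ptr;

let mut storage = MaybeUninit::<String>::uninit();
// `*storage.as_mut_ptr() = ...` would first drop a garbage "old" `String`;
// `write` stores the value without reading or dropping the old contents.
unsafe {
    ptr::write(storage.as_mut_ptr(), String::from("hello"));
}
let s = unsafe { storage.assume_init() };
assert_eq!(s, "hello");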

Use the null and null_mut functions to create null pointers, and the is_null method of the *const T and *mut T types to check for null. The *const T and *mut T types also define the offset method, for pointer math.
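
For instance, a short sketch of these helpers:

use std::ptr;

let null_ptr: *const i32 = ptr::null();
assert!(null_ptr.is_null());

let values = [1, 2, 3];
let first: *const i32 = values.as_ptr();
// `offset` counts in elements, not bytes.
let third = unsafe { first.offset(2) };
assert_eq!(unsafe { *third }, 3);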

§Common ways to create raw pointers

§1. Coerce a reference (&T) or mutable reference (&mut T).

let my_num: i32 = 10;
let my_num_ptr: *const i32 = &my_num;
let mut my_speed: i32 = 88;
let my_speed_ptr: *mut i32 = &mut my_speed;

To get a pointer to a boxed value, dereference the box:

let my_num: Box<i32> = Box::new(10);
let my_num_ptr: *const i32 = &*my_num;
let mut my_speed: Box<i32> = Box::new(88);
let my_speed_ptr: *mut i32 = &mut *my_speed;

This does not take ownership of the original allocation and requires no resource management later, but you must not use the pointer after the value it points to has been dropped, moved, or deallocated.

§2. Consume a box (Box<T>).

The into_raw function consumes a box and returns the raw pointer. It doesn’t destroy T or deallocate any memory.

let my_speed: Box<i32> = Box::new(88);
let my_speed: *mut i32 = Box::into_raw(my_speed);

// Because `into_raw` took ownership of the original `Box<T>`, we are
// obligated to reconstruct the box later so that it can be destroyed.
unsafe {
    drop(Box::from_raw(my_speed));
}

Note that here the call to drop is for clarity - it indicates that we are done with the given value and it should be destroyed.

§3. Create it using ptr::addr_of!

Instead of coercing a reference to a raw pointer, you can use the macros ptr::addr_of! (for *const T) and ptr::addr_of_mut! (for *mut T). These macros allow you to create raw pointers to fields to which you cannot create a reference (without causing undefined behaviour), such as an unaligned field. This might be necessary if packed structs or uninitialized memory is involved.

#[derive(Debug, Default, Copy, Clone)]
#[repr(C, packed)]
struct S {
    aligned: u8,
    unaligned: u32,
}
let s = S::default();
let p = std::ptr::addr_of!(s.unaligned); // not allowed with coercion

§4. Get it from C.

#[allow(unused_extern_crates)]
extern crate libc;

use std::mem;

unsafe {
    let my_num: *mut i32 = libc::malloc(mem::size_of::<i32>()) as *mut i32;
    if my_num.is_null() {
        panic!("failed to allocate memory");
    }
    libc::free(my_num as *mut core::ffi::c_void);
}

Usually you wouldn’t literally use malloc and free from Rust, but C APIs hand out a lot of pointers generally, so they are a common source of raw pointers in Rust.

§Implementations

source§

impl<T: ?Sized> *const T

1.0.0 (const: unstable) · source

pub fn is_null(self) -> bool

Returns true if the pointer is null.

Note that unsized types have many possible null pointers, as only the raw data pointer is considered, not their length, vtable, etc. Therefore, two pointers that are null may still not compare equal to each other.

§Behavior during const evaluation

When this function is used during const evaluation, it may return false for pointers that turn out to be null at runtime. Specifically, when a pointer to some memory is offset beyond its bounds in such a way that the resulting pointer is null, the function will still return false. There is no way for CTFE to know the absolute position of that memory, so we cannot tell if the pointer is null or not.

§Examples
let s: &str = "Follow the rabbit";
let ptr: *const u8 = s.as_ptr();
assert!(!ptr.is_null());
1.38.0 (const: 1.38.0) · source

pub const fn cast<U>(self) -> *const U

Casts to a pointer of another type.
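
A minimal sketch of a typical use, reinterpreting a byte pointer as a pointer to a wider type:

let bytes = [0u8; 4];
let byte_ptr: *const u8 = bytes.as_ptr();
// Same address, different pointee type.
let word_ptr: *const [u8; 4] = byte_ptr.cast();
assert_eq!(word_ptr.cast::<u8>(), byte_ptr);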

source

pub const fn with_metadata_of<U>(self, meta: *const U) -> *const U
where U: ?Sized,

🔬This is a nightly-only experimental API. (set_ptr_value #75091)

Use the pointer value in a new pointer of another type.

In case meta is a (fat) pointer to an unsized type, this operation will ignore the pointer part, whereas for (thin) pointers to sized types, this has the same effect as a simple cast.

The resulting pointer will have provenance of self, i.e., for a fat pointer, this operation is semantically the same as creating a new fat pointer with the data pointer value of self but the metadata of meta.

§Examples

This function is primarily useful for allowing byte-wise pointer arithmetic on potentially fat pointers:

#![feature(set_ptr_value)]
use std::fmt::Debug;

let arr: [i32; 3] = [1, 2, 3];
let mut ptr = arr.as_ptr() as *const dyn Debug;
let thin = ptr as *const u8;
unsafe {
    ptr = thin.add(8).with_metadata_of(ptr);
    println!("{:?}", &*ptr); // will print "3"
}
1.65.0 (const: 1.65.0) · source

pub const fn cast_mut(self) -> *mut T

Changes constness without changing the type.

This is a bit safer than as because it wouldn’t silently change the type if the code is refactored.
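
A small sketch:

let x = 5_i32;
let shared: *const i32 = &x;
let raw_mut: *mut i32 = shared.cast_mut();
// Only the pointer type changed; writing through `raw_mut` would still be
// undefined behavior, because `x` is not mutable.
assert_eq!(raw_mut.cast_const(), shared);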

source

pub fn addr(self) -> usize

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Gets the “address” portion of the pointer.

This is similar to self as usize, which semantically discards provenance and address-space information. However, unlike self as usize, casting the returned address back to a pointer yields a pointer without provenance, which is undefined behavior to dereference. To properly restore the lost information and obtain a dereferenceable pointer, use with_addr or map_addr.

If using those APIs is not possible because there is no way to preserve a pointer with the required provenance, then Strict Provenance might not be for you. Use pointer-integer casts or expose_provenance and with_exposed_provenance instead. However, note that this makes your code less portable and less amenable to tools that check for compliance with the Rust memory model.

On most platforms this will produce a value with the same bytes as the original pointer, because all the bytes are dedicated to describing the address. Platforms which need to store additional information in the pointer may perform a change of representation to produce a value containing only the address portion of the pointer. What that means is up to the platform to define.

This API and its claimed semantics are part of the Strict Provenance experiment, and as such might change in the future (including possibly weakening this so it becomes wholly equivalent to self as usize). See the module documentation for details.
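
A minimal nightly sketch, assuming the strict_provenance feature:

#![feature(strict_provenance)]

let x = 42_u8;
let ptr: *const u8 = &x;
let addr = ptr.addr();
// `with_addr` re-attaches the provenance of `ptr` to the (same) address,
// so the round trip yields a usable pointer again.
assert_eq!(ptr.with_addr(addr), ptr);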

source

pub fn expose_provenance(self) -> usize

🔬This is a nightly-only experimental API. (exposed_provenance #95228)

Exposes the “provenance” part of the pointer for future use in with_exposed_provenance and returns the “address” portion.

This is equivalent to self as usize, which semantically discards provenance and address-space information. Furthermore, this (like the as cast) has the implicit side-effect of marking the provenance as ‘exposed’, so on platforms that support it you can later call with_exposed_provenance to reconstitute the original pointer including its provenance. (Reconstructing address space information, if required, is your responsibility.)

Using this method means that code is not following Strict Provenance rules. Supporting with_exposed_provenance complicates specification and reasoning and may not be supported by tools that help you to stay conformant with the Rust memory model, so it is recommended to use addr wherever possible.

On most platforms this will produce a value with the same bytes as the original pointer, because all the bytes are dedicated to describing the address. Platforms which need to store additional information in the pointer may not support this operation, since the ‘expose’ side-effect which is required for with_exposed_provenance to work is typically not available.

It is unclear whether this method can be given a satisfying unambiguous specification. This API and its claimed semantics are part of Exposed Provenance.
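
A rough nightly sketch, assuming the exposed_provenance feature and the accompanying ptr::with_exposed_provenance function:

#![feature(exposed_provenance)]

let x = 3_u8;
let ptr: *const u8 = &x;
// Marks the provenance as exposed and returns the address as an integer.
let addr = ptr.expose_provenance();
// An integer-to-pointer conversion may later pick that provenance back up.
let again: *const u8 = std::ptr::with_exposed_provenance(addr);
unsafe { assert_eq!(*again, 3); }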

source

pub fn with_addr(self, addr: usize) -> Self

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Creates a new pointer with the given address.

This performs the same operation as an addr as ptr cast, but copies the address-space and provenance of self to the new pointer. This allows us to dynamically preserve and propagate this important information in a way that is otherwise impossible with a unary cast.

This is equivalent to using wrapping_offset to offset self to the given address, and therefore has all the same capabilities and restrictions.

This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for details.

source

pub fn map_addr(self, f: impl FnOnce(usize) -> usize) -> Self

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Creates a new pointer by mapping self’s address to a new one.

This is a convenience for with_addr, see that method for details.

This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for details.
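
A small nightly sketch (strict_provenance), rounding an address down while keeping provenance:

#![feature(strict_provenance)]

let buf = [0_u8; 16];
let unaligned = buf.as_ptr().wrapping_add(1);
// Clear the low bits of the address; the provenance of `unaligned` is kept,
// unlike a plain pointer-to-integer round trip.
let rounded = unaligned.map_addr(|a| a & !0b111);
assert_eq!(rounded.addr() % 8, 0);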

source

pub const fn to_raw_parts(self) -> (*const (), <T as Pointee>::Metadata)

🔬This is a nightly-only experimental API. (ptr_metadata #81513)

Decompose a (possibly wide) pointer into its data pointer and metadata components.

The pointer can be later reconstructed with from_raw_parts.
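
A rough nightly sketch, assuming the ptr_metadata feature and ptr::from_raw_parts:

#![feature(ptr_metadata)]

let values = [1_i32, 2, 3];
let slice_ptr: *const [i32] = &values[..];
// For a slice, the metadata is its length.
let (data, len) = slice_ptr.to_raw_parts();
assert_eq!(len, 3);
let rebuilt: *const [i32] = std::ptr::from_raw_parts(data, len);
assert_eq!(rebuilt, slice_ptr);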

1.9.0 (const: unstable) · source

pub unsafe fn as_ref<'a>(self) -> Option<&'a T>

Returns None if the pointer is null, or else returns a shared reference to the value wrapped in Some. If the value may be uninitialized, as_uninit_ref must be used instead.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

§Examples
let ptr: *const u8 = &10u8 as *const u8;

unsafe {
    if let Some(val_back) = ptr.as_ref() {
        assert_eq!(val_back, &10);
    }
}
§Null-unchecked version

If you are sure the pointer can never be null and are looking for some kind of as_ref_unchecked that returns the &T instead of Option<&T>, know that you can dereference the pointer directly.

let ptr: *const u8 = &10u8 as *const u8;

unsafe {
    let val_back = &*ptr;
    assert_eq!(val_back, &10);
}
source

pub const unsafe fn as_ref_unchecked<'a>(self) -> &'a T

🔬This is a nightly-only experimental API. (ptr_as_ref_unchecked #122034)

Returns a shared reference to the value behind the pointer. If the pointer may be null or the value may be uninitialized, as_uninit_ref must be used instead. If the pointer may be null, but the value is known to have been initialized, as_ref must be used instead.

§Safety

When calling this method, you have to ensure that all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

§Examples
#![feature(ptr_as_ref_unchecked)]
let ptr: *const u8 = &10u8 as *const u8;

unsafe {
    assert_eq!(ptr.as_ref_unchecked(), &10);
}
source

pub const unsafe fn as_uninit_ref<'a>(self) -> Option<&'a MaybeUninit<T>>
where T: Sized,

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a shared reference to the value wrapped in Some. In contrast to as_ref, this does not require that the value has to be initialized.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused!

§Examples
#![feature(ptr_as_uninit)]

let ptr: *const u8 = &10u8 as *const u8;

unsafe {
    if let Some(val_back) = ptr.as_uninit_ref() {
        assert_eq!(val_back.assume_init(), 10);
    }
}
1.0.0 (const: 1.61.0) · source

pub const unsafe fn offset(self, count: isize) -> *const T
where T: Sized,

Adds an offset to a pointer.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let s: &str = "123";
let ptr: *const u8 = s.as_ptr();

unsafe {
    assert_eq!(*ptr.offset(1) as char, '2');
    assert_eq!(*ptr.offset(2) as char, '3');
}
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_offset(self, count: isize) -> Self

Calculates the offset from a pointer in bytes.

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using offset on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.
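
For example:

let values = [1_u16, 2, 3];
let ptr: *const u16 = values.as_ptr();
unsafe {
    // Two bytes is exactly one `u16`, so this lands on `values[1]`.
    let second = ptr.byte_offset(2);
    assert_eq!(*second, 2);
}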

1.16.0 (const: 1.61.0) · source

pub const fn wrapping_offset(self, count: isize) -> *const T
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_offset((y as isize) - (x as isize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to offset, this method basically delays the requirement of staying within the same allocated object: offset is immediate Undefined Behavior when crossing object boundaries; wrapping_offset produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. offset can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_offset(o).wrapping_offset(o.wrapping_neg()) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
use std::fmt::Write;

// Iterate using a raw pointer in increments of two elements
let data = [1u8, 2, 3, 4, 5];
let mut ptr: *const u8 = data.as_ptr();
let step = 2;
let end_rounded_up = ptr.wrapping_offset(6);

let mut out = String::new();
while ptr != end_rounded_up {
    unsafe {
        write!(&mut out, "{}, ", *ptr).unwrap();
    }
    ptr = ptr.wrapping_offset(step);
}
assert_eq!(out.as_str(), "1, 3, 5, ");
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_offset(self, count: isize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic.

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_offset on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

source

pub fn mask(self, mask: usize) -> *const T

🔬This is a nightly-only experimental API. (ptr_mask #98290)

Masks out bits of the pointer according to a mask.

This is convenience for ptr.map_addr(|a| a & mask).

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

§Examples
#![feature(ptr_mask, strict_provenance)]
let v = 17_u32;
let ptr: *const u32 = &v;

// `u32` is 4 bytes aligned,
// which means that lower 2 bits are always 0.
let tag_mask = 0b11;
let ptr_mask = !tag_mask;

// We can store something in these lower bits
let tagged_ptr = ptr.map_addr(|a| a | 0b10);

// Get the "tag" back
let tag = tagged_ptr.addr() & tag_mask;
assert_eq!(tag, 0b10);

// Note that `tagged_ptr` is unaligned; it's UB to read from it.
// To get the original pointer back, `mask` can be used:
let masked_ptr = tagged_ptr.mask(ptr_mask);
assert_eq!(unsafe { *masked_ptr }, 17);
1.47.0 (const: 1.65.0) · source

pub const unsafe fn offset_from(self, origin: *const T) -> isize
where T: Sized,

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes divided by mem::size_of::<T>().

This is equivalent to (self as isize - origin as isize) / (mem::size_of::<T>() as isize), except that it has a lot more opportunities for UB, in exchange for the compiler better understanding what you are doing.

The primary motivation of this method is for computing the len of an array/slice of T that you are currently representing as a “start” and “end” pointer (and “end” is “one past the end” of the array). In that case, end.offset_from(start) gets you the length of the array.

All of the following safety requirements are trivially satisfied for this use case.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • self and origin must either

    • point to the same address, or
    • both be derived from a pointer to the same allocated object, and the memory range between the two pointers must be in bounds of that object. (See below for an example.)
  • The distance between the pointers, in bytes, must be an exact multiple of the size of T.

As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without “wrapping around”), cannot overflow an isize. This is implied by the in-bounds requirement, and the fact that no allocated object can be larger than isize::MAX bytes.

The requirement for pointers to be derived from the same allocated object is primarily needed for const-compatibility: the distance between pointers into different allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use (self as isize - origin as isize) / mem::size_of::<T>().

§Panics

This function panics if T is a Zero-Sized Type (“ZST”).

§Examples

Basic usage:

let a = [0; 5];
let ptr1: *const i32 = &a[1];
let ptr2: *const i32 = &a[3];
unsafe {
    assert_eq!(ptr2.offset_from(ptr1), 2);
    assert_eq!(ptr1.offset_from(ptr2), -2);
    assert_eq!(ptr1.offset(2), ptr2);
    assert_eq!(ptr2.offset(-2), ptr1);
}

Incorrect usage:

let ptr1 = Box::into_raw(Box::new(0u8)) as *const u8;
let ptr2 = Box::into_raw(Box::new(1u8)) as *const u8;
let diff = (ptr2 as isize).wrapping_sub(ptr1 as isize);
// Make ptr2_other an "alias" of ptr2.add(1), but derived from ptr1.
let ptr2_other = (ptr1 as *const u8).wrapping_offset(diff).wrapping_offset(1);
assert_eq!(ptr2 as usize, ptr2_other as usize);
// Since ptr2_other and ptr2 are derived from pointers to different objects,
// computing their offset is undefined behavior, even though
// they point to addresses that are in-bounds of the same object!
unsafe {
    let one = ptr2_other.offset_from(ptr2); // Undefined Behavior! ⚠️
}
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_offset_from<U: ?Sized>(self, origin: *const U) -> isize

Calculates the distance between two pointers. The returned value is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using offset_from on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation considers only the data pointers, ignoring the metadata.

source

pub const unsafe fn sub_ptr(self, origin: *const T) -> usize
where T: Sized,

🔬This is a nightly-only experimental API. (ptr_sub_ptr #95892)

Calculates the distance between two pointers, where it’s known that self is equal to or greater than origin. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

This computes the same value that offset_from would compute, but with the added precondition that the offset is guaranteed to be non-negative. This method is equivalent to usize::try_from(self.offset_from(origin)).unwrap_unchecked(), but it provides slightly more information to the optimizer, which can sometimes allow it to optimize slightly better with some backends.

This method can be thought of as recovering the count that was passed to add (or, with the parameters in the other order, to sub). The following are all equivalent, assuming that their safety preconditions are met:

ptr.sub_ptr(origin) == count
origin.add(count) == ptr
ptr.sub(count) == origin
§Safety
  • The distance between the pointers must be non-negative (self >= origin)

  • All the safety conditions of offset_from apply to this method as well; see it for the full details.

Importantly, despite the return type of this method being able to represent a larger offset, it’s still not permitted to pass pointers which differ by more than isize::MAX bytes. As such, the result of this method will always be less than or equal to isize::MAX as usize.

§Panics

This function panics if T is a Zero-Sized Type (“ZST”).

§Examples
#![feature(ptr_sub_ptr)]

let a = [0; 5];
let ptr1: *const i32 = &a[1];
let ptr2: *const i32 = &a[3];
unsafe {
    assert_eq!(ptr2.sub_ptr(ptr1), 2);
    assert_eq!(ptr1.add(2), ptr2);
    assert_eq!(ptr2.sub(2), ptr1);
    assert_eq!(ptr2.sub_ptr(ptr2), 0);
}

// This would be incorrect, as the pointers are not correctly ordered:
// ptr1.sub_ptr(ptr2)
source

pub const fn guaranteed_eq(self, other: *const T) -> Option<bool>
where T: Sized,

🔬This is a nightly-only experimental API. (const_raw_ptr_comparison #53020)

Returns whether two pointers are guaranteed to be equal.

At runtime this function behaves like Some(self == other). However, in some contexts (e.g., compile-time evaluation), it is not always possible to determine equality of two pointers, so this function may spuriously return None for pointers that later actually turn out to have their equality known. But when it returns Some, the pointers’ equality is guaranteed to be known.

The return value may change from Some to None and vice versa depending on the compiler version and unsafe code must not rely on the result of this function for soundness. It is suggested to only use this function for performance optimizations where spurious None return values by this function do not affect the outcome, but just the performance. The consequences of using this method to make runtime and compile-time code behave differently have not been explored. This method should not be used to introduce such differences, and it should also not be stabilized before we have a better understanding of this issue.

source

pub const fn guaranteed_ne(self, other: *const T) -> Option<bool>
where T: Sized,

🔬This is a nightly-only experimental API. (const_raw_ptr_comparison #53020)

Returns whether two pointers are guaranteed to be unequal.

At runtime this function behaves like Some(self != other). However, in some contexts (e.g., compile-time evaluation), it is not always possible to determine inequality of two pointers, so this function may spuriously return None for pointers that later actually turn out to have their inequality known. But when it returns Some, the pointers’ inequality is guaranteed to be known.

The return value may change from Some to None and vice versa depending on the compiler version and unsafe code must not rely on the result of this function for soundness. It is suggested to only use this function for performance optimizations where spurious None return values by this function do not affect the outcome, but just the performance. The consequences of using this method to make runtime and compile-time code behave differently have not been explored. This method should not be used to introduce such differences, and it should also not be stabilized before we have a better understanding of this issue.

1.26.0 (const: 1.61.0) · source

pub const unsafe fn add(self, count: usize) -> Self
where T: Sized,

Adds an offset to a pointer (convenience for .offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_add instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let s: &str = "123";
let ptr: *const u8 = s.as_ptr();

unsafe {
    assert_eq!(*ptr.add(1), b'2');
    assert_eq!(*ptr.add(2), b'3');
}
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_add(self, count: usize) -> Self

Calculates the offset from a pointer in bytes (convenience for .byte_offset(count as isize)).

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using add on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.61.0) · source

pub const unsafe fn sub(self, count: usize) -> Self
where T: Sized,

Subtracts an offset from a pointer (convenience for .offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_sub instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let s: &str = "123";

unsafe {
    let end: *const u8 = s.as_ptr().add(3);
    assert_eq!(*end.sub(1), b'3');
    assert_eq!(*end.sub(2), b'2');
}
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_sub(self, count: usize) -> Self

Calculates the offset from a pointer in bytes (convenience for .byte_offset((count as isize).wrapping_neg())).

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using sub on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.61.0) · source

pub const fn wrapping_add(self, count: usize) -> Self
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic. (convenience for .wrapping_offset(count as isize))

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_add((y as usize) - (x as usize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to add, this method basically delays the requirement of staying within the same allocated object: add is immediate Undefined Behavior when crossing object boundaries; wrapping_add produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. add can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_add(o).wrapping_sub(o) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
use std::fmt::Write;

// Iterate using a raw pointer in increments of two elements
let data = [1u8, 2, 3, 4, 5];
let mut ptr: *const u8 = data.as_ptr();
let step = 2;
let end_rounded_up = ptr.wrapping_add(6);

let mut out = String::new();
while ptr != end_rounded_up {
    unsafe {
        write!(&mut out, "{}, ", *ptr).unwrap();
    }
    ptr = ptr.wrapping_add(step);
}
assert_eq!(out, "1, 3, 5, ");
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_add(self, count: usize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic. (convenience for .wrapping_byte_offset(count as isize))

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_add on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.61.0) · source

pub const fn wrapping_sub(self, count: usize) -> Self
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic. (convenience for .wrapping_offset((count as isize).wrapping_neg()))

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_sub((x as usize) - (y as usize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to sub, this method basically delays the requirement of staying within the same allocated object: sub is immediate Undefined Behavior when crossing object boundaries; wrapping_sub produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. sub can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_add(o).wrapping_sub(o) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
use std::fmt::Write;

// Iterate using a raw pointer in increments of two elements (backwards)
let data = [1u8, 2, 3, 4, 5];
let mut ptr: *const u8 = data.as_ptr();
let start_rounded_down = ptr.wrapping_sub(2);
ptr = ptr.wrapping_add(4);
let step = 2;
let mut out = String::new();
while ptr != start_rounded_down {
    unsafe {
        write!(&mut out, "{}, ", *ptr).unwrap();
    }
    ptr = ptr.wrapping_sub(step);
}
assert_eq!(out, "5, 3, 1, ");
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_sub(self, count: usize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic. (convenience for .wrapping_offset((count as isize).wrapping_neg()))

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_sub on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.71.0) · source

pub const unsafe fn read(self) -> T
where T: Sized,

Reads the value from self without moving it. This leaves the memory in self unchanged.

See ptr::read for safety concerns and examples.
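
A short sketch:

let value = 12_u32;
let ptr: *const u32 = &value;
unsafe {
    // Bitwise copy of the pointee; `value` itself is left untouched.
    assert_eq!(ptr.read(), 12);
}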

1.26.0 · source

pub unsafe fn read_volatile(self) -> T
where T: Sized,

Performs a volatile read of the value from self without moving it. This leaves the memory in self unchanged.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::read_volatile for safety concerns and examples.

1.26.0 (const: 1.71.0) · source

pub const unsafe fn read_unaligned(self) -> T
where T: Sized,

Reads the value from self without moving it. This leaves the memory in self unchanged.

Unlike read, the pointer may be unaligned.

See ptr::read_unaligned for safety concerns and examples.
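
For example, reading an unaligned field of a packed struct (taking its address with ptr::addr_of! so that no reference is ever created):

#[repr(C, packed)]
struct Packed {
    tag: u8,
    value: u32,
}

let packed = Packed { tag: 1, value: 0xdead_beef };
let value_ptr = std::ptr::addr_of!(packed.value);
unsafe {
    assert_eq!(value_ptr.read_unaligned(), 0xdead_beef);
}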

1.26.0 (const: unstable) · source

pub unsafe fn copy_to(self, dest: *mut T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from self to dest. The source and destination may overlap.

NOTE: this has the same argument order as ptr::copy.

See ptr::copy for safety concerns and examples.
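
A minimal sketch:

let src = [1_u8, 2, 3];
let mut dst = [0_u8; 3];
unsafe {
    // Copies three `u8` elements from `src` into `dst`.
    src.as_ptr().copy_to(dst.as_mut_ptr(), 3);
}
assert_eq!(dst, [1, 2, 3]);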

1.26.0 (const: unstable) · source

pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from self to dest. The source and destination may not overlap.

NOTE: this has the same argument order as ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.

1.36.0 (const: unstable) · source

pub fn align_offset(self, align: usize) -> usize
where T: Sized,

Computes the offset that needs to be applied to the pointer in order to make it aligned to align.

If it is not possible to align the pointer, the implementation returns usize::MAX.

The offset is expressed in number of T elements, and not bytes. The value returned can be used with the wrapping_add method.

There are no guarantees whatsoever that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

When this is called during compile-time evaluation (which is unstable), the implementation may return usize::MAX in cases where that can never happen at runtime. This is because the actual alignment of pointers is not known yet during compile-time, so an offset with guaranteed alignment can sometimes not be computed. For example, a buffer declared as [u8; N] might be allocated at an odd or an even address, but at compile-time this is not yet known, so the execution has to be correct for either choice. It is therefore impossible to find an offset that is guaranteed to be 2-aligned. (This behavior is subject to change, as usual for unstable APIs.)

§Panics

The function panics if align is not a power-of-two.

§Examples

Accessing adjacent u8 as u16

use std::mem::align_of;

let x = [5_u8, 6, 7, 8, 9];
let ptr = x.as_ptr();
let offset = ptr.align_offset(align_of::<u16>());

unsafe {
    if offset < x.len() - 1 {
        let u16_ptr = ptr.add(offset).cast::<u16>();
        assert!(*u16_ptr == u16::from_ne_bytes([5, 6]) || *u16_ptr == u16::from_ne_bytes([6, 7]));
    } else {
        // while the pointer can be aligned via `offset`, it would point
        // outside the allocation
    }
}
1.79.0 (const: unstable) · source

pub fn is_aligned(self) -> bool
where T: Sized,

Returns whether the pointer is properly aligned for T.

§Examples
// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

let data = AlignedI32(42);
let ptr = &data as *const AlignedI32;

assert!(ptr.is_aligned());
assert!(!ptr.wrapping_byte_add(1).is_aligned());
§At compiletime

Note: Alignment at compiletime is experimental and subject to change. See the tracking issue for details.

At compiletime, the compiler may not know where a value will end up in memory. Calling this function on a pointer created from a reference at compiletime will only return true if the pointer is guaranteed to be aligned. This means that the pointer is never aligned if cast to a type with a stricter alignment than the reference’s underlying allocation.

#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

const _: () = {
    let data = AlignedI32(42);
    let ptr = &data as *const AlignedI32;
    assert!(ptr.is_aligned());

    // At runtime either `ptr1` or `ptr2` would be aligned, but at compiletime neither is aligned.
    let ptr1 = ptr.cast::<AlignedI64>();
    let ptr2 = ptr.wrapping_add(1).cast::<AlignedI64>();
    assert!(!ptr1.is_aligned());
    assert!(!ptr2.is_aligned());
};

Due to this behavior, it is possible that a runtime pointer derived from a compiletime pointer is aligned, even if the compiletime pointer wasn’t aligned.

#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

// At compiletime, neither `COMPTIME_PTR` nor `COMPTIME_PTR + 1` is aligned.
const COMPTIME_PTR: *const AlignedI32 = &AlignedI32(42);
const _: () = assert!(!COMPTIME_PTR.cast::<AlignedI64>().is_aligned());
const _: () = assert!(!COMPTIME_PTR.wrapping_add(1).cast::<AlignedI64>().is_aligned());

// At runtime, either `runtime_ptr` or `runtime_ptr + 1` is aligned.
let runtime_ptr = COMPTIME_PTR;
assert_ne!(
    runtime_ptr.cast::<AlignedI64>().is_aligned(),
    runtime_ptr.wrapping_add(1).cast::<AlignedI64>().is_aligned(),
);

If a pointer is created from a fixed address, this function behaves the same during runtime and compiletime.

#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

const _: () = {
    let ptr = 40 as *const AlignedI32;
    assert!(ptr.is_aligned());

    // For pointers with a known address, runtime and compiletime behavior are identical.
    let ptr1 = ptr.cast::<AlignedI64>();
    let ptr2 = ptr.wrapping_add(1).cast::<AlignedI64>();
    assert!(ptr1.is_aligned());
    assert!(!ptr2.is_aligned());
};
source

pub const fn is_aligned_to(self, align: usize) -> bool

🔬This is a nightly-only experimental API. (pointer_is_aligned_to #96284)

Returns whether the pointer is aligned to align.

For non-Sized pointees this operation considers only the data pointer, ignoring the metadata.

§Panics

The function panics if align is not a power-of-two (this includes 0).

§Examples
#![feature(pointer_is_aligned_to)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

let data = AlignedI32(42);
let ptr = &data as *const AlignedI32;

assert!(ptr.is_aligned_to(1));
assert!(ptr.is_aligned_to(2));
assert!(ptr.is_aligned_to(4));

assert!(ptr.wrapping_byte_add(2).is_aligned_to(2));
assert!(!ptr.wrapping_byte_add(2).is_aligned_to(4));

assert_ne!(ptr.is_aligned_to(8), ptr.wrapping_add(1).is_aligned_to(8));
§At compiletime

Note: Alignment at compiletime is experimental and subject to change. See the tracking issue for details.

At compiletime, the compiler may not know where a value will end up in memory. Calling this function on a pointer created from a reference at compiletime will only return true if the pointer is guaranteed to be aligned. This means that the pointer cannot be stricter aligned than the reference’s underlying allocation.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

const _: () = {
    let data = AlignedI32(42);
    let ptr = &data as *const AlignedI32;

    assert!(ptr.is_aligned_to(1));
    assert!(ptr.is_aligned_to(2));
    assert!(ptr.is_aligned_to(4));

    // At compiletime, we know for sure that the pointer isn't aligned to 8.
    assert!(!ptr.is_aligned_to(8));
    assert!(!ptr.wrapping_add(1).is_aligned_to(8));
};

Due to this behavior, it is possible that a runtime pointer derived from a compiletime pointer is aligned, even if the compiletime pointer wasn’t aligned.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

// At compiletime, neither `COMPTIME_PTR` nor `COMPTIME_PTR + 1` is aligned.
const COMPTIME_PTR: *const AlignedI32 = &AlignedI32(42);
const _: () = assert!(!COMPTIME_PTR.is_aligned_to(8));
const _: () = assert!(!COMPTIME_PTR.wrapping_add(1).is_aligned_to(8));

// At runtime, either `runtime_ptr` or `runtime_ptr + 1` is aligned.
let runtime_ptr = COMPTIME_PTR;
assert_ne!(
    runtime_ptr.is_aligned_to(8),
    runtime_ptr.wrapping_add(1).is_aligned_to(8),
);

If a pointer is created from a fixed address, this function behaves the same during runtime and compiletime.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]

const _: () = {
    let ptr = 40 as *const u8;
    assert!(ptr.is_aligned_to(1));
    assert!(ptr.is_aligned_to(2));
    assert!(ptr.is_aligned_to(4));
    assert!(ptr.is_aligned_to(8));
    assert!(!ptr.is_aligned_to(16));
};
source§

impl<T> *const [T]

1.79.0 (const: 1.79.0) · source

pub const fn len(self) -> usize

Returns the length of a raw slice.

The returned value is the number of elements, not the number of bytes.

This function is safe, even when the raw slice cannot be cast to a slice reference because the pointer is null or unaligned.

§Examples
use std::ptr;

let slice: *const [i8] = ptr::slice_from_raw_parts(ptr::null(), 3);
assert_eq!(slice.len(), 3);
1.79.0 (const: 1.79.0) · source

pub const fn is_empty(self) -> bool

Returns true if the raw slice has a length of 0.

§Examples
use std::ptr;

let slice: *const [i8] = ptr::slice_from_raw_parts(ptr::null(), 3);
assert!(!slice.is_empty());
source

pub const fn as_ptr(self) -> *const T

🔬This is a nightly-only experimental API. (slice_ptr_get #74265)

Returns a raw pointer to the slice’s buffer.

This is equivalent to casting self to *const T, but more type-safe.

§Examples
#![feature(slice_ptr_get)]
use std::ptr;

let slice: *const [i8] = ptr::slice_from_raw_parts(ptr::null(), 3);
assert_eq!(slice.as_ptr(), ptr::null());
source

pub unsafe fn get_unchecked<I>(self, index: I) -> *const I::Output
where I: SliceIndex<[T]>,

🔬This is a nightly-only experimental API. (slice_ptr_get #74265)

Returns a raw pointer to an element or subslice, without doing bounds checking.

Calling this method with an out-of-bounds index or when self is not dereferenceable is undefined behavior even if the resulting pointer is not used.

§Examples
#![feature(slice_ptr_get)]

let x = &[1, 2, 4] as *const [i32];

unsafe {
    assert_eq!(x.get_unchecked(1), x.as_ptr().add(1));
}
source

pub const unsafe fn as_uninit_slice<'a>(self) -> Option<&'a [MaybeUninit<T>]>

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a shared slice to the value wrapped in Some. In contrast to as_ref, this does not require that the value has to be initialized.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be valid for reads for ptr.len() * mem::size_of::<T>() many bytes, and it must be properly aligned. This means in particular:

    • The entire memory range of this slice must be contained within a single allocated object! Slices can never span across multiple allocated objects.

    • The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as data for zero-length slices using NonNull::dangling().

  • The total size ptr.len() * mem::size_of::<T>() of the slice must be no larger than isize::MAX. See the safety documentation of pointer::offset.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused!

See also slice::from_raw_parts.
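
A rough nightly sketch (ptr_as_uninit):

#![feature(ptr_as_uninit)]

let data = [1_u8, 2, 3];
let ptr: *const [u8] = &data[..];
unsafe {
    let uninit = ptr.as_uninit_slice().unwrap();
    assert_eq!(uninit.len(), 3);
    // The slice is in fact initialized here, so reading an element is fine.
    assert_eq!(uninit[0].assume_init_read(), 1);
}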

source§

impl<T, const N: usize> *const [T; N]

source

pub const fn as_ptr(self) -> *const T

🔬This is a nightly-only experimental API. (array_ptr_get #119834)

Returns a raw pointer to the array’s buffer.

This is equivalent to casting self to *const T, but more type-safe.

§Examples
#![feature(array_ptr_get)]
use std::ptr;

let arr: *const [i8; 3] = ptr::null();
assert_eq!(arr.as_ptr(), ptr::null());
source

pub const fn as_slice(self) -> *const [T]

🔬This is a nightly-only experimental API. (array_ptr_get #119834)

Returns a raw pointer to a slice containing the entire array.

§Examples
#![feature(array_ptr_get)]

let arr: *const [i32; 3] = &[1, 2, 4] as *const [i32; 3];
let slice: *const [i32] = arr.as_slice();
assert_eq!(slice.len(), 3);
source§

impl<T: ?Sized> *mut T

1.0.0 (const: unstable) · source

pub fn is_null(self) -> bool

Returns true if the pointer is null.

Note that unsized types have many possible null pointers, as only the raw data pointer is considered, not their length, vtable, etc. Therefore, two pointers that are null may still not compare equal to each other.

§Behavior during const evaluation

When this function is used during const evaluation, it may return false for pointers that turn out to be null at runtime. Specifically, when a pointer to some memory is offset beyond its bounds in such a way that the resulting pointer is null, the function will still return false. There is no way for CTFE to know the absolute position of that memory, so we cannot tell if the pointer is null or not.

§Examples
let mut s = [1, 2, 3];
let ptr: *mut u32 = s.as_mut_ptr();
assert!(!ptr.is_null());
1.38.0 (const: 1.38.0) · source

pub const fn cast<U>(self) -> *mut U

Casts to a pointer of another type.

source

pub const fn with_metadata_of<U>(self, meta: *const U) -> *mut U
where U: ?Sized,

🔬This is a nightly-only experimental API. (set_ptr_value #75091)

Use the pointer value in a new pointer of another type.

In case meta is a (fat) pointer to an unsized type, this operation will ignore the pointer part, whereas for (thin) pointers to sized types, this has the same effect as a simple cast.

The resulting pointer will have provenance of self, i.e., for a fat pointer, this operation is semantically the same as creating a new fat pointer with the data pointer value of self but the metadata of meta.

§Examples

This function is primarily useful for allowing byte-wise pointer arithmetic on potentially fat pointers:

#![feature(set_ptr_value)]
use std::fmt::Debug;

let mut arr: [i32; 3] = [1, 2, 3];
let mut ptr = arr.as_mut_ptr() as *mut dyn Debug;
let thin = ptr as *mut u8;
unsafe {
    ptr = thin.add(8).with_metadata_of(ptr);
    println!("{:?}", &*ptr); // will print "3"
}
1.65.0 (const: 1.65.0) · source

pub const fn cast_const(self) -> *const T

Changes constness without changing the type.

This is a bit safer than as because it wouldn’t silently change the type if the code is refactored.

While not strictly required (*mut T coerces to *const T), this is provided for symmetry with cast_mut on *const T and may have documentation value if used instead of implicit coercion.
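
A small sketch:

let mut value = 7_i32;
let raw_mut: *mut i32 = &mut value;
// Makes the loss of mutability explicit instead of relying on coercion.
let raw_const: *const i32 = raw_mut.cast_const();
unsafe { assert_eq!(*raw_const, 7); }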

source

pub fn addr(self) -> usize

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Gets the “address” portion of the pointer.

This is similar to self as usize, which semantically discards provenance and address-space information. However, unlike self as usize, casting the returned address back to a pointer yields a pointer without provenance, which is undefined behavior to dereference. To properly restore the lost information and obtain a dereferenceable pointer, use with_addr or map_addr.

If using those APIs is not possible because there is no way to preserve a pointer with the required provenance, then Strict Provenance might not be for you. Use pointer-integer casts or expose_provenance and with_exposed_provenance instead. However, note that this makes your code less portable and less amenable to tools that check for compliance with the Rust memory model.

On most platforms this will produce a value with the same bytes as the original pointer, because all the bytes are dedicated to describing the address. Platforms which need to store additional information in the pointer may perform a change of representation to produce a value containing only the address portion of the pointer. What that means is up to the platform to define.

This API and its claimed semantics are part of the Strict Provenance experiment, and as such might change in the future (including possibly weakening this so it becomes wholly equivalent to self as usize). See the module documentation for details.

source

pub fn expose_provenance(self) -> usize

🔬This is a nightly-only experimental API. (exposed_provenance #95228)

Exposes the “provenance” part of the pointer for future use in with_exposed_provenance and returns the “address” portion.

This is equivalent to self as usize, which semantically discards provenance and address-space information. Furthermore, this (like the as cast) has the implicit side-effect of marking the provenance as ‘exposed’, so on platforms that support it you can later call with_exposed_provenance_mut to reconstitute the original pointer including its provenance. (Reconstructing address space information, if required, is your responsibility.)

Using this method means that code is not following Strict Provenance rules. Supporting with_exposed_provenance_mut complicates specification and reasoning and may not be supported by tools that help you to stay conformant with the Rust memory model, so it is recommended to use addr wherever possible.

On most platforms this will produce a value with the same bytes as the original pointer, because all the bytes are dedicated to describing the address. Platforms which need to store additional information in the pointer may not support this operation, since the ‘expose’ side-effect which is required for with_exposed_provenance_mut to work is typically not available.

It is unclear whether this method can be given a satisfying unambiguous specification. This API and its claimed semantics are part of Exposed Provenance.

source

pub fn with_addr(self, addr: usize) -> Self

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Creates a new pointer with the given address.

This performs the same operation as an addr as ptr cast, but copies the address-space and provenance of self to the new pointer. This allows us to dynamically preserve and propagate this important information in a way that is otherwise impossible with a unary cast.

This is equivalent to using wrapping_offset to offset self to the given address, and therefore has all the same capabilities and restrictions.

This API and its claimed semantics are an extension to the Strict Provenance experiment, see the module documentation for details.

source

pub fn map_addr(self, f: impl FnOnce(usize) -> usize) -> Self

🔬This is a nightly-only experimental API. (strict_provenance #95228)

Creates a new pointer by mapping self’s address to a new one.

This is a convenience for with_addr, see that method for details.

This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for details.

source

pub const fn to_raw_parts(self) -> (*mut (), <T as Pointee>::Metadata)

🔬This is a nightly-only experimental API. (ptr_metadata #81513)

Decompose a (possibly wide) pointer into its data pointer and metadata components.

The pointer can be later reconstructed with from_raw_parts_mut.

1.9.0 (const: unstable) · source

pub unsafe fn as_ref<'a>(self) -> Option<&'a T>

Returns None if the pointer is null, or else returns a shared reference to the value wrapped in Some. If the value may be uninitialized, as_uninit_ref must be used instead.

For the mutable counterpart see as_mut.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

§Examples
let ptr: *mut u8 = &mut 10u8 as *mut u8;

unsafe {
    if let Some(val_back) = ptr.as_ref() {
        println!("We got back the value: {val_back}!");
    }
}
§Null-unchecked version

If you are sure the pointer can never be null and are looking for some kind of as_ref_unchecked that returns the &T instead of Option<&T>, know that you can dereference the pointer directly.

let ptr: *mut u8 = &mut 10u8 as *mut u8;

unsafe {
    let val_back = &*ptr;
    println!("We got back the value: {val_back}!");
}
Run
source

pub const unsafe fn as_ref_unchecked<'a>(self) -> &'a T

🔬This is a nightly-only experimental API. (ptr_as_ref_unchecked #122034)

Returns a shared reference to the value behind the pointer. If the pointer may be null or the value may be uninitialized, as_uninit_ref must be used instead. If the pointer may be null, but the value is known to have been initialized, as_ref must be used instead.

For the mutable counterpart see as_mut_unchecked.

§Safety

When calling this method, you have to ensure that all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that the value is indeed initialized.)

§Examples
#![feature(ptr_as_ref_unchecked)]
let ptr: *mut u8 = &mut 10u8 as *mut u8;

unsafe {
    println!("We got back the value: {}!", ptr.as_ref_unchecked());
}
Run
source

pub const unsafe fn as_uninit_ref<'a>(self) -> Option<&'a MaybeUninit<T>>
where T: Sized,

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a shared reference to the value wrapped in Some. In contrast to as_ref, this does not require that the value has to be initialized.

For the mutable counterpart see as_uninit_mut.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused!

§Examples
#![feature(ptr_as_uninit)]

let ptr: *mut u8 = &mut 10u8 as *mut u8;

unsafe {
    if let Some(val_back) = ptr.as_uninit_ref() {
        println!("We got back the value: {}!", val_back.assume_init());
    }
}
Run
1.0.0 (const: 1.61.0) · source

pub const unsafe fn offset(self, count: isize) -> *mut T
where T: Sized,

Adds an offset to a pointer.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let mut s = [1, 2, 3];
let ptr: *mut u32 = s.as_mut_ptr();

unsafe {
    assert_eq!(2, *ptr.offset(1));
    assert_eq!(3, *ptr.offset(2));
}
Run
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_offset(self, count: isize) -> Self

Calculates the offset from a pointer in bytes.

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using offset on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.
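
For illustration, a small sketch (not an official example) of a byte-wise offset that happens to stay aligned:

let mut x = [5u16, 10, 15];
let ptr: *mut u16 = x.as_mut_ptr();

unsafe {
    // A byte offset of 2 is one `u16` element here, so the result stays aligned.
    assert_eq!(*ptr.byte_offset(2), 10);
    assert_eq!(*ptr.byte_offset(4), 15);
}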

1.16.0 (const: 1.61.0) · source

pub const fn wrapping_offset(self, count: isize) -> *mut T
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic. count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_offset((y as isize) - (x as isize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to offset, this method basically delays the requirement of staying within the same allocated object: offset is immediate Undefined Behavior when crossing object boundaries; wrapping_offset produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. offset can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_offset(o).wrapping_offset(o.wrapping_neg()) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
// Iterate using a raw pointer in increments of two elements
let mut data = [1u8, 2, 3, 4, 5];
let mut ptr: *mut u8 = data.as_mut_ptr();
let step = 2;
let end_rounded_up = ptr.wrapping_offset(6);

while ptr != end_rounded_up {
    unsafe {
        *ptr = 0;
    }
    ptr = ptr.wrapping_offset(step);
}
assert_eq!(&data, &[0, 2, 0, 4, 0]);
Run
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_offset(self, count: isize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic.

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_offset on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

source

pub fn mask(self, mask: usize) -> *mut T

🔬This is a nightly-only experimental API. (ptr_mask #98290)

Masks out bits of the pointer according to a mask.

This is a convenience for ptr.map_addr(|a| a & mask).

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

§Examples
#![feature(ptr_mask, strict_provenance)]
let mut v = 17_u32;
let ptr: *mut u32 = &mut v;

// `u32` is 4 bytes aligned,
// which means that lower 2 bits are always 0.
let tag_mask = 0b11;
let ptr_mask = !tag_mask;

// We can store something in these lower bits
let tagged_ptr = ptr.map_addr(|a| a | 0b10);

// Get the "tag" back
let tag = tagged_ptr.addr() & tag_mask;
assert_eq!(tag, 0b10);

// Note that `tagged_ptr` is unaligned, it's UB to read from/write to it.
// To get original pointer `mask` can be used:
let masked_ptr = tagged_ptr.mask(ptr_mask);
assert_eq!(unsafe { *masked_ptr }, 17);

unsafe { *masked_ptr = 0 };
assert_eq!(v, 0);
Run
1.9.0 (const: unstable) · source

pub unsafe fn as_mut<'a>(self) -> Option<&'a mut T>

Returns None if the pointer is null, or else returns a unique reference to the value wrapped in Some. If the value may be uninitialized, as_uninit_mut must be used instead.

For the shared counterpart see as_ref.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer.

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that the value is indeed initialized.)

§Examples
let mut s = [1, 2, 3];
let ptr: *mut u32 = s.as_mut_ptr();
let first_value = unsafe { ptr.as_mut().unwrap() };
*first_value = 4;
println!("{s:?}"); // It'll print: "[4, 2, 3]".
Run
§Null-unchecked version

If you are sure the pointer can never be null and are looking for some kind of as_mut_unchecked that returns the &mut T instead of Option<&mut T>, know that you can dereference the pointer directly.

let mut s = [1, 2, 3];
let ptr: *mut u32 = s.as_mut_ptr();
let first_value = unsafe { &mut *ptr };
*first_value = 4;
println!("{s:?}"); // It'll print: "[4, 2, 3]".
Run
source

pub const unsafe fn as_mut_unchecked<'a>(self) -> &'a mut T

🔬This is a nightly-only experimental API. (ptr_as_ref_unchecked #122034)

Returns a unique reference to the value behind the pointer. If the pointer may be null or the value may be uninitialized, as_uninit_mut must be used instead. If the pointer may be null, but the value is known to have been initialized, as_mut must be used instead.

For the shared counterpart see as_ref_unchecked.

§Safety

When calling this method, you have to ensure that all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • The pointer must point to an initialized instance of T.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer.

This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that the value is indeed initialized.)

§Examples
#![feature(ptr_as_ref_unchecked)]
let mut s = [1, 2, 3];
let ptr: *mut u32 = s.as_mut_ptr();
let first_value = unsafe { ptr.as_mut_unchecked() };
*first_value = 4;
println!("{s:?}"); // It'll print: "[4, 2, 3]".
Run
source

pub const unsafe fn as_uninit_mut<'a>(self) -> Option<&'a mut MaybeUninit<T>>
where T: Sized,

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a unique reference to the value wrapped in Some. In contrast to as_mut, this does not require that the value has to be initialized.

For the shared counterpart see as_uninit_ref.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be properly aligned.

  • It must be “dereferenceable” in the sense defined in the module documentation.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer.

This applies even if the result of this method is unused!
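
A minimal sketch (not an official example) of initializing a value through the returned MaybeUninit reference, assuming a nightly toolchain with the ptr_as_uninit feature:

#![feature(ptr_as_uninit)]

let mut v = 42u8;
let ptr: *mut u8 = &mut v;

unsafe {
    if let Some(slot) = ptr.as_uninit_mut() {
        // Overwrite the slot without reading or dropping the old value.
        slot.write(13);
    }
}
assert_eq!(v, 13);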

source

pub const fn guaranteed_eq(self, other: *mut T) -> Option<bool>
where T: Sized,

🔬This is a nightly-only experimental API. (const_raw_ptr_comparison #53020)

Returns whether two pointers are guaranteed to be equal.

At runtime this function behaves like Some(self == other). However, in some contexts (e.g., compile-time evaluation), it is not always possible to determine the equality of two pointers, so this function may spuriously return None for pointers whose equality later turns out to be known. But when it returns Some, the pointers’ equality is guaranteed to be known.

The return value may change from Some to None and vice versa depending on the compiler version and unsafe code must not rely on the result of this function for soundness. It is suggested to only use this function for performance optimizations where spurious None return values by this function do not affect the outcome, but just the performance. The consequences of using this method to make runtime and compile-time code behave differently have not been explored. This method should not be used to introduce such differences, and it should also not be stabilized before we have a better understanding of this issue.
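
As a rough sketch of the runtime behavior only (not an official example), assuming a nightly toolchain with the const_raw_ptr_comparison feature:

#![feature(const_raw_ptr_comparison)]

let mut x = 0i32;
let ptr: *mut i32 = &mut x;

// Outside of const evaluation the answer is always known.
assert_eq!(ptr.guaranteed_eq(ptr), Some(true));
assert_eq!(ptr.guaranteed_eq(ptr.wrapping_add(1)), Some(false));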

source

pub const fn guaranteed_ne(self, other: *mut T) -> Option<bool>
where T: Sized,

🔬This is a nightly-only experimental API. (const_raw_ptr_comparison #53020)

Returns whether two pointers are guaranteed to be unequal.

At runtime this function behaves like Some(self != other). However, in some contexts (e.g., compile-time evaluation), it is not always possible to determine the inequality of two pointers, so this function may spuriously return None for pointers whose inequality later turns out to be known. But when it returns Some, the pointers’ inequality is guaranteed to be known.

The return value may change from Some to None and vice versa depending on the compiler version and unsafe code must not rely on the result of this function for soundness. It is suggested to only use this function for performance optimizations where spurious None return values by this function do not affect the outcome, but just the performance. The consequences of using this method to make runtime and compile-time code behave differently have not been explored. This method should not be used to introduce such differences, and it should also not be stabilized before we have a better understanding of this issue.

1.47.0 (const: 1.65.0) · source

pub const unsafe fn offset_from(self, origin: *const T) -> isize
where T: Sized,

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes divided by mem::size_of::<T>().

This is equivalent to (self as isize - origin as isize) / (mem::size_of::<T>() as isize), except that it has a lot more opportunities for UB, in exchange for the compiler better understanding what you are doing.

The primary motivation of this method is for computing the len of an array/slice of T that you are currently representing as a “start” and “end” pointer (and “end” is “one past the end” of the array). In that case, end.offset_from(start) gets you the length of the array.

All of the following safety requirements are trivially satisfied for this use case.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • self and origin must either

    • point to the same address, or
    • both be derived from a pointer to the same allocated object, and the memory range between the two pointers must be in bounds of that object. (See below for an example.)
  • The distance between the pointers, in bytes, must be an exact multiple of the size of T.

As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without “wrapping around”), cannot overflow an isize. This is implied by the in-bounds requirement, and the fact that no allocated object can be larger than isize::MAX bytes.

The requirement for pointers to be derived from the same allocated object is primarily needed for const-compatibility: the distance between pointers into different allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use (self as isize - origin as isize) / mem::size_of::<T>().

§Panics

This function panics if T is a Zero-Sized Type (“ZST”).

§Examples

Basic usage:

let mut a = [0; 5];
let ptr1: *mut i32 = &mut a[1];
let ptr2: *mut i32 = &mut a[3];
unsafe {
    assert_eq!(ptr2.offset_from(ptr1), 2);
    assert_eq!(ptr1.offset_from(ptr2), -2);
    assert_eq!(ptr1.offset(2), ptr2);
    assert_eq!(ptr2.offset(-2), ptr1);
}
Run

Incorrect usage:

let ptr1 = Box::into_raw(Box::new(0u8));
let ptr2 = Box::into_raw(Box::new(1u8));
let diff = (ptr2 as isize).wrapping_sub(ptr1 as isize);
// Make ptr2_other an "alias" of ptr2.add(1), but derived from ptr1.
let ptr2_other = (ptr1 as *mut u8).wrapping_offset(diff).wrapping_offset(1);
assert_eq!(ptr2 as usize, ptr2_other as usize);
// Since ptr2_other and ptr2 are derived from pointers to different objects,
// computing their offset is undefined behavior, even though
// they point to addresses that are in-bounds of the same object!
unsafe {
    let one = ptr2_other.offset_from(ptr2); // Undefined Behavior! ⚠️
}
Run
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_offset_from<U: ?Sized>(self, origin: *const U) -> isize

Calculates the distance between two pointers. The returned value is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using offset_from on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation considers only the data pointers, ignoring the metadata.
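
A small sketch (not an official example) contrasting element and byte distances:

let mut v = [0u32; 4];
let base: *mut u32 = v.as_mut_ptr();

unsafe {
    let two = base.add(2);
    // Two `u32` elements apart is 8 bytes.
    assert_eq!(two.byte_offset_from(base), 8);
    assert_eq!(base.byte_offset_from(two), -8);
}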

source

pub const unsafe fn sub_ptr(self, origin: *const T) -> usize
where T: Sized,

🔬This is a nightly-only experimental API. (ptr_sub_ptr #95892)

Calculates the distance between two pointers, where it’s known that self is equal to or greater than origin. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

This computes the same value that offset_from would compute, but with the added precondition that the offset is guaranteed to be non-negative. This method is equivalent to usize::try_from(self.offset_from(origin)).unwrap_unchecked(), but it provides slightly more information to the optimizer, which can sometimes allow it to optimize slightly better with some backends.

This method can be thought of as recovering the count that was passed to add (or, with the parameters in the other order, to sub). The following are all equivalent, assuming that their safety preconditions are met:

ptr.sub_ptr(origin) == count
origin.add(count) == ptr
ptr.sub(count) == origin
Run
§Safety
  • The distance between the pointers must be non-negative (self >= origin)

  • All the safety conditions of offset_from apply to this method as well; see it for the full details.

Importantly, despite the return type of this method being able to represent a larger offset, it’s still not permitted to pass pointers which differ by more than isize::MAX bytes. As such, the result of this method will always be less than or equal to isize::MAX as usize.

§Panics

This function panics if T is a Zero-Sized Type (“ZST”).

§Examples
#![feature(ptr_sub_ptr)]

let mut a = [0; 5];
let p: *mut i32 = a.as_mut_ptr();
unsafe {
    let ptr1: *mut i32 = p.add(1);
    let ptr2: *mut i32 = p.add(3);

    assert_eq!(ptr2.sub_ptr(ptr1), 2);
    assert_eq!(ptr1.add(2), ptr2);
    assert_eq!(ptr2.sub(2), ptr1);
    assert_eq!(ptr2.sub_ptr(ptr2), 0);
}

// This would be incorrect, as the pointers are not correctly ordered:
// ptr1.offset_from(ptr2)
Run
1.26.0 (const: 1.61.0) · source

pub const unsafe fn add(self, count: usize) -> Self
where T: Sized,

Adds an offset to a pointer (convenience for .offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_add instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let s: &str = "123";
let ptr: *const u8 = s.as_ptr();

unsafe {
    assert_eq!('2', *ptr.add(1) as char);
    assert_eq!('3', *ptr.add(2) as char);
}
Run
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_add(self, count: usize) -> Self

Calculates the offset from a pointer in bytes (convenience for .byte_offset(count as isize)).

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using add on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.
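
For illustration (not an official example), with u8 an element offset and a byte offset coincide:

let mut x = [1u8, 2, 3];
let ptr: *mut u8 = x.as_mut_ptr();

unsafe {
    // Moving 2 bytes forward lands on the third `u8` element.
    assert_eq!(*ptr.byte_add(2), 3);
}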

1.26.0 (const: 1.61.0) · source

pub const unsafe fn sub(self, count: usize) -> Self
where T: Sized,

Subtracts an offset from a pointer (convenience for .offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • The computed offset, count * size_of::<T>() bytes, must not overflow isize.

  • If the computed offset is non-zero, then self must be derived from a pointer to some allocated object, and the entire memory range between self and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.

Allocated objects can never be larger than isize::MAX bytes, so if the computed offset stays in bounds of the allocated object, it is guaranteed to satisfy the first requirement. This implies, for instance, that vec.as_ptr().add(vec.len()) (for vec: Vec<T>) is always safe.

Consider using wrapping_sub instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

§Examples
let s: &str = "123";

unsafe {
    let end: *const u8 = s.as_ptr().add(3);
    assert_eq!('3', *end.sub(1) as char);
    assert_eq!('2', *end.sub(2) as char);
}
Run
1.75.0 (const: 1.75.0) · source

pub const unsafe fn byte_sub(self, count: usize) -> Self

Calculates the offset from a pointer in bytes (convenience for .byte_offset((count as isize).wrapping_neg())).

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using sub on it. See that method for documentation and safety requirements.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.
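
A brief sketch (not an official example), stepping backwards in bytes from a one-past-the-end pointer:

let mut x = [1u8, 2, 3];

unsafe {
    let end: *mut u8 = x.as_mut_ptr().add(3);
    assert_eq!(*end.byte_sub(1), 3);
    assert_eq!(*end.byte_sub(3), 1);
}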

1.26.0 (const: 1.61.0) · source

pub const fn wrapping_add(self, count: usize) -> Self
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic. (convenience for .wrapping_offset(count as isize))

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_add((y as usize) - (x as usize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to add, this method basically delays the requirement of staying within the same allocated object: add is immediate Undefined Behavior when crossing object boundaries; wrapping_add produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. add can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_add(o).wrapping_sub(o) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
// Iterate using a raw pointer in increments of two elements
let data = [1u8, 2, 3, 4, 5];
let mut ptr: *const u8 = data.as_ptr();
let step = 2;
let end_rounded_up = ptr.wrapping_add(6);

// This loop prints "1, 3, 5, "
while ptr != end_rounded_up {
    unsafe {
        print!("{}, ", *ptr);
    }
    ptr = ptr.wrapping_add(step);
}
Run
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_add(self, count: usize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic. (convenience for .wrapping_byte_offset(count as isize))

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_add on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.61.0) · source

pub const fn wrapping_sub(self, count: usize) -> Self
where T: Sized,

Calculates the offset from a pointer using wrapping arithmetic. (convenience for .wrapping_offset((count as isize).wrapping_neg()))

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

§Safety

This operation itself is always safe, but using the resulting pointer is not.

The resulting pointer “remembers” the allocated object that self points to; it must not be used to read or write other allocated objects.

In other words, let z = x.wrapping_sub((x as usize) - (y as usize)) does not make z the same as y even if we assume T has size 1 and there is no overflow: z is still attached to the object x is attached to, and dereferencing it is Undefined Behavior unless x and y point into the same allocated object.

Compared to sub, this method basically delays the requirement of staying within the same allocated object: sub is immediate Undefined Behavior when crossing object boundaries; wrapping_sub produces a pointer but still leads to Undefined Behavior if a pointer is dereferenced when it is out-of-bounds of the object it is attached to. sub can be optimized better and is thus preferable in performance-sensitive code.

The delayed check only considers the value of the pointer that was dereferenced, not the intermediate values used during the computation of the final result. For example, x.wrapping_add(o).wrapping_sub(o) is always the same as x. In other words, leaving the allocated object and then re-entering it later is permitted.

§Examples
// Iterate using a raw pointer in increments of two elements (backwards)
let data = [1u8, 2, 3, 4, 5];
let mut ptr: *const u8 = data.as_ptr();
let start_rounded_down = ptr.wrapping_sub(2);
ptr = ptr.wrapping_add(4);
let step = 2;
// This loop prints "5, 3, 1, "
while ptr != start_rounded_down {
    unsafe {
        print!("{}, ", *ptr);
    }
    ptr = ptr.wrapping_sub(step);
}
Run
1.75.0 (const: 1.75.0) · source

pub const fn wrapping_byte_sub(self, count: usize) -> Self

Calculates the offset from a pointer in bytes using wrapping arithmetic. (convenience for .wrapping_offset((count as isize).wrapping_neg()))

count is in units of bytes.

This is purely a convenience for casting to a u8 pointer and using wrapping_sub on it. See that method for documentation.

For non-Sized pointees this operation changes only the data pointer, leaving the metadata untouched.

1.26.0 (const: 1.71.0) · source

pub const unsafe fn read(self) -> T
where T: Sized,

Reads the value from self without moving it. This leaves the memory in self unchanged.

See ptr::read for safety concerns and examples.
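
A minimal sketch (not an official example) of a plain read through a raw pointer:

let mut x = 12u32;
let ptr: *mut u32 = &mut x;

unsafe {
    // The value is copied out; the memory behind `ptr` is left unchanged.
    assert_eq!(ptr.read(), 12);
}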

1.26.0 · source

pub unsafe fn read_volatile(self) -> T
where T: Sized,

Performs a volatile read of the value from self without moving it. This leaves the memory in self unchanged.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::read_volatile for safety concerns and examples.

1.26.0 (const: 1.71.0) · source

pub const unsafe fn read_unaligned(self) -> T
where T: Sized,

Reads the value from self without moving it. This leaves the memory in self unchanged.

Unlike read, the pointer may be unaligned.

See ptr::read_unaligned for safety concerns and examples.
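
A small sketch (not an official example) reading a u32 from an odd byte offset, where an ordinary read would require alignment:

let mut data = [0u8; 5];
data[1..5].copy_from_slice(&0x0102_0304u32.to_ne_bytes());

unsafe {
    // A `u32` pointer at byte offset 1 is (usually) not 4-byte aligned.
    let ptr: *mut u32 = data.as_mut_ptr().add(1).cast::<u32>();
    assert_eq!(ptr.read_unaligned(), 0x0102_0304);
}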

1.26.0 (const: unstable) · source

pub unsafe fn copy_to(self, dest: *mut T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from self to dest. The source and destination may overlap.

NOTE: this has the same argument order as ptr::copy.

See ptr::copy for safety concerns and examples.
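
For illustration (not an official example), copying four bytes out of a buffer; the source is the method receiver, matching ptr::copy’s argument order:

let mut src = [1u8, 2, 3, 4];
let mut dst = [0u8; 4];
let src_ptr: *mut u8 = src.as_mut_ptr();

unsafe {
    src_ptr.copy_to(dst.as_mut_ptr(), 4);
}
assert_eq!(dst, [1, 2, 3, 4]);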

1.26.0 (const: unstable) · source

pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from self to dest. The source and destination may not overlap.

NOTE: this has the same argument order as ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.

1.26.0 (const: unstable) · source

pub unsafe fn copy_from(self, src: *const T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from src to self. The source and destination may overlap.

NOTE: this has the opposite argument order of ptr::copy.

See ptr::copy for safety concerns and examples.

1.26.0 (const: unstable) · source

pub unsafe fn copy_from_nonoverlapping(self, src: *const T, count: usize)
where T: Sized,

Copies count * size_of::<T>() bytes from src to self. The source and destination may not overlap.

NOTE: this has the opposite argument order of ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.

1.26.0 · source

pub unsafe fn drop_in_place(self)

Executes the destructor (if any) of the pointed-to value.

See ptr::drop_in_place for safety concerns and examples.
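
As a rough sketch (not an official example): dropping an Rc in place releases its strong count, after which the slot must be re-initialized (here with write) before it is used or dropped again:

use std::rc::Rc;

let rc = Rc::new(5);
let weak = Rc::downgrade(&rc);
let mut slot = Some(rc);
let ptr: *mut Option<Rc<i32>> = &mut slot;

unsafe {
    // Run the destructor of the stored value in place...
    ptr.drop_in_place();
    // ...then re-initialize the slot so it can be dropped normally later.
    ptr.write(None);
}
assert!(weak.upgrade().is_none());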

1.26.0 (const: unstable) · source

pub unsafe fn write(self, val: T)
where T: Sized,

Overwrites a memory location with the given value without reading or dropping the old value.

See ptr::write for safety concerns and examples.
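
A minimal sketch (not an official example) of initializing memory that has no valid old value to drop:

use std::mem::MaybeUninit;

let mut slot = MaybeUninit::<String>::uninit();
let ptr: *mut String = slot.as_mut_ptr();

unsafe {
    // `write` neither reads nor drops the (uninitialized) old contents.
    ptr.write(String::from("hello"));
    assert_eq!(slot.assume_init(), "hello");
}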

1.26.0 (const: unstable) · source

pub unsafe fn write_bytes(self, val: u8, count: usize)
where T: Sized,

Invokes memset on the specified pointer, setting count * size_of::<T>() bytes of memory starting at self to val.

See ptr::write_bytes for safety concerns and examples.
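
For illustration (not an official example), zeroing the first half of a buffer:

let mut buf = [1u8; 8];
let ptr: *mut u8 = buf.as_mut_ptr();

unsafe {
    // Set the first 4 bytes to 0, leaving the rest untouched.
    ptr.write_bytes(0, 4);
}
assert_eq!(buf, [0, 0, 0, 0, 1, 1, 1, 1]);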

1.26.0 · source

pub unsafe fn write_volatile(self, val: T)
where T: Sized,

Performs a volatile write of a memory location with the given value without reading or dropping the old value.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::write_volatile for safety concerns and examples.

1.26.0 (const: unstable) · source

pub unsafe fn write_unaligned(self, val: T)
where T: Sized,

Overwrites a memory location with the given value without reading or dropping the old value.

Unlike write, the pointer may be unaligned.

See ptr::write_unaligned for safety concerns and examples.

1.26.0 · source

pub unsafe fn replace(self, src: T) -> T
where T: Sized,

Replaces the value at self with src, returning the old value, without dropping either.

See ptr::replace for safety concerns and examples.
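
A small sketch (not an official example); the old value is handed back to the caller instead of being dropped in place:

let mut v = vec![1, 2, 3];
let ptr: *mut Vec<i32> = &mut v;

unsafe {
    let old = ptr.replace(vec![4, 5]);
    assert_eq!(old, [1, 2, 3]);
}
assert_eq!(v, [4, 5]);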

1.26.0 (const: unstable) · source

pub unsafe fn swap(self, with: *mut T)
where T: Sized,

Swaps the values at two mutable locations of the same type, without deinitializing either. They may overlap, unlike mem::swap which is otherwise equivalent.

See ptr::swap for safety concerns and examples.
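
For illustration (not an official example), swapping two values through raw pointers:

let mut a = [1, 2, 3];
let mut b = [4, 5, 6];
let pa: *mut [i32; 3] = &mut a;
let pb: *mut [i32; 3] = &mut b;

unsafe {
    pa.swap(pb);
}
assert_eq!(a, [4, 5, 6]);
assert_eq!(b, [1, 2, 3]);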

1.36.0 (const: unstable) · source

pub fn align_offset(self, align: usize) -> usize
where T: Sized,

Computes the offset that needs to be applied to the pointer in order to make it aligned to align.

If it is not possible to align the pointer, the implementation returns usize::MAX.

The offset is expressed in number of T elements, and not bytes. The value returned can be used with the wrapping_add method.

There are no guarantees whatsoever that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

When this is called during compile-time evaluation (which is unstable), the implementation may return usize::MAX in cases where that can never happen at runtime. This is because the actual alignment of pointers is not known yet during compile-time, so an offset with guaranteed alignment can sometimes not be computed. For example, a buffer declared as [u8; N] might be allocated at an odd or an even address, but at compile-time this is not yet known, so the execution has to be correct for either choice. It is therefore impossible to find an offset that is guaranteed to be 2-aligned. (This behavior is subject to change, as usual for unstable APIs.)

§Panics

The function panics if align is not a power-of-two.

§Examples

Accessing adjacent u8 as u16

use std::mem::align_of;

let mut x = [5_u8, 6, 7, 8, 9];
let ptr = x.as_mut_ptr();
let offset = ptr.align_offset(align_of::<u16>());

if offset < x.len() - 1 {
    unsafe {
        let u16_ptr = ptr.add(offset).cast::<u16>();
        *u16_ptr = 0;
    }

    assert!(x == [0, 0, 7, 8, 9] || x == [5, 0, 0, 8, 9]);
} else {
    // while the pointer can be aligned via `offset`, it would point
    // outside the allocation
}
Run
1.79.0 (const: unstable) · source

pub fn is_aligned(self) -> bool
where T: Sized,

Returns whether the pointer is properly aligned for T.

§Examples
// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

let mut data = AlignedI32(42);
let ptr = &mut data as *mut AlignedI32;

assert!(ptr.is_aligned());
assert!(!ptr.wrapping_byte_add(1).is_aligned());
Run
§At compiletime

Note: Alignment at compiletime is experimental and subject to change. See the tracking issue for details.

At compiletime, the compiler may not know where a value will end up in memory. Calling this function on a pointer created from a reference at compiletime will only return true if the pointer is guaranteed to be aligned. This means that the pointer is never aligned if cast to a type with a stricter alignment than the reference’s underlying allocation.

#![feature(const_pointer_is_aligned)]
#![feature(const_mut_refs)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

const _: () = {
    let mut data = AlignedI32(42);
    let ptr = &mut data as *mut AlignedI32;
    assert!(ptr.is_aligned());

    // At runtime either `ptr1` or `ptr2` would be aligned, but at compiletime neither is aligned.
    let ptr1 = ptr.cast::<AlignedI64>();
    let ptr2 = ptr.wrapping_add(1).cast::<AlignedI64>();
    assert!(!ptr1.is_aligned());
    assert!(!ptr2.is_aligned());
};
Run

Due to this behavior, it is possible that a runtime pointer derived from a compiletime pointer is aligned, even if the compiletime pointer wasn’t aligned.

#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

// At compiletime, neither `COMPTIME_PTR` nor `COMPTIME_PTR + 1` is aligned.
// Also, note that mutable references are not allowed in the final value of constants.
const COMPTIME_PTR: *mut AlignedI32 = (&AlignedI32(42) as *const AlignedI32).cast_mut();
const _: () = assert!(!COMPTIME_PTR.cast::<AlignedI64>().is_aligned());
const _: () = assert!(!COMPTIME_PTR.wrapping_add(1).cast::<AlignedI64>().is_aligned());

// At runtime, either `runtime_ptr` or `runtime_ptr + 1` is aligned.
let runtime_ptr = COMPTIME_PTR;
assert_ne!(
    runtime_ptr.cast::<AlignedI64>().is_aligned(),
    runtime_ptr.wrapping_add(1).cast::<AlignedI64>().is_aligned(),
);
Run

If a pointer is created from a fixed address, this function behaves the same during runtime and compiletime.

#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of primitives is less than their size.
#[repr(align(4))]
struct AlignedI32(i32);
#[repr(align(8))]
struct AlignedI64(i64);

const _: () = {
    let ptr = 40 as *mut AlignedI32;
    assert!(ptr.is_aligned());

    // For pointers with a known address, runtime and compiletime behavior are identical.
    let ptr1 = ptr.cast::<AlignedI64>();
    let ptr2 = ptr.wrapping_add(1).cast::<AlignedI64>();
    assert!(ptr1.is_aligned());
    assert!(!ptr2.is_aligned());
};
Run
source

pub const fn is_aligned_to(self, align: usize) -> bool

🔬This is a nightly-only experimental API. (pointer_is_aligned_to #96284)

Returns whether the pointer is aligned to align.

For non-Sized pointees this operation considers only the data pointer, ignoring the metadata.

§Panics

The function panics if align is not a power-of-two (this includes 0).

§Examples
#![feature(pointer_is_aligned_to)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

let mut data = AlignedI32(42);
let ptr = &mut data as *mut AlignedI32;

assert!(ptr.is_aligned_to(1));
assert!(ptr.is_aligned_to(2));
assert!(ptr.is_aligned_to(4));

assert!(ptr.wrapping_byte_add(2).is_aligned_to(2));
assert!(!ptr.wrapping_byte_add(2).is_aligned_to(4));

assert_ne!(ptr.is_aligned_to(8), ptr.wrapping_add(1).is_aligned_to(8));
Run
§At compiletime

Note: Alignment at compiletime is experimental and subject to change. See the tracking issue for details.

At compiletime, the compiler may not know where a value will end up in memory. Calling this function on a pointer created from a reference at compiletime will only return true if the pointer is guaranteed to be aligned. This means that the pointer cannot be more strictly aligned than the reference’s underlying allocation.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]
#![feature(const_mut_refs)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

const _: () = {
    let mut data = AlignedI32(42);
    let ptr = &mut data as *mut AlignedI32;

    assert!(ptr.is_aligned_to(1));
    assert!(ptr.is_aligned_to(2));
    assert!(ptr.is_aligned_to(4));

    // At compiletime, we know for sure that the pointer isn't aligned to 8.
    assert!(!ptr.is_aligned_to(8));
    assert!(!ptr.wrapping_add(1).is_aligned_to(8));
};
Run

Due to this behavior, it is possible that a runtime pointer derived from a compiletime pointer is aligned, even if the compiletime pointer wasn’t aligned.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]

// On some platforms, the alignment of i32 is less than 4.
#[repr(align(4))]
struct AlignedI32(i32);

// At compiletime, neither `COMPTIME_PTR` nor `COMPTIME_PTR + 1` is aligned.
// Also, note that mutable references are not allowed in the final value of constants.
const COMPTIME_PTR: *mut AlignedI32 = (&AlignedI32(42) as *const AlignedI32).cast_mut();
const _: () = assert!(!COMPTIME_PTR.is_aligned_to(8));
const _: () = assert!(!COMPTIME_PTR.wrapping_add(1).is_aligned_to(8));

// At runtime, either `runtime_ptr` or `runtime_ptr + 1` is aligned.
let runtime_ptr = COMPTIME_PTR;
assert_ne!(
    runtime_ptr.is_aligned_to(8),
    runtime_ptr.wrapping_add(1).is_aligned_to(8),
);
Run

If a pointer is created from a fixed address, this function behaves the same during runtime and compiletime.

#![feature(pointer_is_aligned_to)]
#![feature(const_pointer_is_aligned)]

const _: () = {
    let ptr = 40 as *mut u8;
    assert!(ptr.is_aligned_to(1));
    assert!(ptr.is_aligned_to(2));
    assert!(ptr.is_aligned_to(4));
    assert!(ptr.is_aligned_to(8));
    assert!(!ptr.is_aligned_to(16));
};
Run
source§

impl<T> *mut [T]

1.79.0 (const: 1.79.0) · source

pub const fn len(self) -> usize

Returns the length of a raw slice.

The returned value is the number of elements, not the number of bytes.

This function is safe, even when the raw slice cannot be cast to a slice reference because the pointer is null or unaligned.

§Examples
use std::ptr;

let slice: *mut [i8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 3);
assert_eq!(slice.len(), 3);
Run
1.79.0 (const: 1.79.0) · source

pub const fn is_empty(self) -> bool

Returns true if the raw slice has a length of 0.

§Examples
use std::ptr;

let slice: *mut [i8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 3);
assert!(!slice.is_empty());
Run
source

pub unsafe fn split_at_mut(self, mid: usize) -> (*mut [T], *mut [T])

🔬This is a nightly-only experimental API. (raw_slice_split #95595)

Divides one mutable raw slice into two at an index.

The first will contain all indices from [0, mid) (excluding the index mid itself) and the second will contain all indices from [mid, len) (excluding the index len itself).

§Panics

Panics if mid > len.

§Safety

mid must be in-bounds of the underlying allocated object, which means self must be dereferenceable and span a single allocation that is at least mid * size_of::<T>() bytes long. Not upholding these requirements is undefined behavior even if the resulting pointers are not used.

Since len being in-bounds is not a safety invariant of *mut [T], the safety requirements of this method are the same as for split_at_mut_unchecked. The explicit bounds check is only as useful as len is correct.

§Examples
#![feature(raw_slice_split)]
#![feature(slice_ptr_get)]

let mut v = [1, 0, 3, 0, 5, 6];
let ptr = &mut v as *mut [_];
unsafe {
    let (left, right) = ptr.split_at_mut(2);
    assert_eq!(&*left, [1, 0]);
    assert_eq!(&*right, [3, 0, 5, 6]);
}
Run
source

pub unsafe fn split_at_mut_unchecked(self, mid: usize) -> (*mut [T], *mut [T])

🔬This is a nightly-only experimental API. (raw_slice_split #95595)

Divides one mutable raw slice into two at an index, without doing bounds checking.

The first will contain all indices from [0, mid) (excluding the index mid itself) and the second will contain all indices from [mid, len) (excluding the index len itself).

§Safety

mid must be in-bounds of the underlying allocated object, which means self must be dereferenceable and span a single allocation that is at least mid * size_of::<T>() bytes long. Not upholding these requirements is undefined behavior even if the resulting pointers are not used.

§Examples
#![feature(raw_slice_split)]

let mut v = [1, 0, 3, 0, 5, 6];
// scoped to restrict the lifetime of the borrows
unsafe {
    let ptr = &mut v as *mut [_];
    let (left, right) = ptr.split_at_mut_unchecked(2);
    assert_eq!(&*left, [1, 0]);
    assert_eq!(&*right, [3, 0, 5, 6]);
    (&mut *left)[1] = 2;
    (&mut *right)[1] = 4;
}
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
Run
source

pub const fn as_mut_ptr(self) -> *mut T

🔬This is a nightly-only experimental API. (slice_ptr_get #74265)

Returns a raw pointer to the slice’s buffer.

This is equivalent to casting self to *mut T, but more type-safe.

§Examples
#![feature(slice_ptr_get)]
use std::ptr;

let slice: *mut [i8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 3);
assert_eq!(slice.as_mut_ptr(), ptr::null_mut());
Run
source

pub unsafe fn get_unchecked_mut<I>(self, index: I) -> *mut I::Output
where I: SliceIndex<[T]>,

🔬This is a nightly-only experimental API. (slice_ptr_get #74265)

Returns a raw pointer to an element or subslice, without doing bounds checking.

Calling this method with an out-of-bounds index or when self is not dereferenceable is undefined behavior even if the resulting pointer is not used.

§Examples
#![feature(slice_ptr_get)]

let x = &mut [1, 2, 4] as *mut [i32];

unsafe {
    assert_eq!(x.get_unchecked_mut(1), x.as_mut_ptr().add(1));
}
Run
source

pub const unsafe fn as_uninit_slice<'a>(self) -> Option<&'a [MaybeUninit<T>]>

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a shared slice to the value wrapped in Some. In contrast to as_ref, this does not require that the value has to be initialized.

For the mutable counterpart see as_uninit_slice_mut.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be valid for reads for ptr.len() * mem::size_of::<T>() many bytes, and it must be properly aligned. This means in particular:

    • The entire memory range of this slice must be contained within a single allocated object! Slices can never span across multiple allocated objects.

    • The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as data for zero-length slices using NonNull::dangling().

  • The total size ptr.len() * mem::size_of::<T>() of the slice must be no larger than isize::MAX. See the safety documentation of pointer::offset.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside UnsafeCell).

This applies even if the result of this method is unused!

See also slice::from_raw_parts.
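
A minimal sketch (not an official example), assuming a nightly toolchain with the ptr_as_uninit feature:

#![feature(ptr_as_uninit)]
use std::ptr;

let mut data = [1u8, 2, 3];
let len = data.len();
let raw: *mut [u8] = ptr::slice_from_raw_parts_mut(data.as_mut_ptr(), len);

unsafe {
    // The slice is valid and initialized here, so inspecting it is fine.
    let uninit = raw.as_uninit_slice().unwrap();
    assert_eq!(uninit.len(), 3);
    assert_eq!(uninit[0].assume_init_read(), 1);
}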

source

pub const unsafe fn as_uninit_slice_mut<'a>(self) -> Option<&'a mut [MaybeUninit<T>]>

🔬This is a nightly-only experimental API. (ptr_as_uninit #75402)

Returns None if the pointer is null, or else returns a unique slice to the value wrapped in Some. In contrast to as_mut, this does not require that the value has to be initialized.

For the shared counterpart see as_uninit_slice.

§Safety

When calling this method, you have to ensure that either the pointer is null or all of the following is true:

  • The pointer must be valid for reads and writes for ptr.len() * mem::size_of::<T>() many bytes, and it must be properly aligned. This means in particular:

    • The entire memory range of this slice must be contained within a single allocated object! Slices can never span across multiple allocated objects.

    • The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as data for zero-length slices using NonNull::dangling().

  • The total size ptr.len() * mem::size_of::<T>() of the slice must be no larger than isize::MAX. See the safety documentation of pointer::offset.

  • You must enforce Rust’s aliasing rules, since the returned lifetime 'a is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer.

This applies even if the result of this method is unused!

See also slice::from_raw_parts_mut.
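
A minimal sketch (not an official example), assuming a nightly toolchain with the ptr_as_uninit feature:

#![feature(ptr_as_uninit)]
use std::ptr;

let mut data = [0u8; 3];
let len = data.len();
let raw: *mut [u8] = ptr::slice_from_raw_parts_mut(data.as_mut_ptr(), len);

unsafe {
    if let Some(slice) = raw.as_uninit_slice_mut() {
        // Write through the `MaybeUninit` elements without dropping anything.
        slice[0].write(7);
    }
}
assert_eq!(data[0], 7);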

source§

impl<T, const N: usize> *mut [T; N]

source

pub const fn as_mut_ptr(self) -> *mut T

🔬This is a nightly-only experimental API. (array_ptr_get #119834)

Returns a raw pointer to the array’s buffer.

This is equivalent to casting self to *mut T, but more type-safe.

§Examples
#![feature(array_ptr_get)]
use std::ptr;

let arr: *mut [i8; 3] = ptr::null_mut();
assert_eq!(arr.as_mut_ptr(), ptr::null_mut());
Run
source

pub const fn as_mut_slice(self) -> *mut [T]

🔬This is a nightly-only experimental API. (array_ptr_get #119834)

Returns a raw pointer to a mutable slice containing the entire array.

§Examples
#![feature(array_ptr_get)]

let mut arr = [1, 2, 5];
let ptr: *mut [i32; 3] = &mut arr;
unsafe {
    (&mut *ptr.as_mut_slice())[..2].copy_from_slice(&[3, 4]);
}
assert_eq!(arr, [3, 4, 5]);
Run

Trait Implementations§

source§

impl<P: ?Sized, T: Thin> AggregateRawPtr<*const T> for *const P

§

type Metadata = <P as Pointee>::Metadata

🔬This is a nightly-only experimental API. (core_intrinsics)
source§

impl<P: ?Sized, T: Thin> AggregateRawPtr<*mut T> for *mut P

§

type Metadata = <P as Pointee>::Metadata

🔬This is a nightly-only experimental API. (core_intrinsics)
1.0.0 · source§

impl<T: ?Sized> Clone for *const T

source§

fn clone(&self) -> Self

Returns a copy of the value. Read more
1.0.0 · source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
1.0.0 · source§

impl<T: ?Sized> Clone for *mut T

source§

fn clone(&self) -> Self

Returns a copy of the value. Read more
1.0.0 · source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
1.0.0 · source§

impl<T: ?Sized> Debug for *const T

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
1.0.0 · source§

impl<T: ?Sized> Debug for *mut T

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
1.23.0 · source§

impl<T> From<*mut T> for AtomicPtr<T>

source§

fn from(p: *mut T) -> Self

Converts a *mut T into an AtomicPtr<T>.

1.0.0 · source§

impl<T: ?Sized> Hash for *const T

source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · source§

fn hash_slice<H: Hasher>(data: &[Self], state: &mut H)
where Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
1.0.0 · source§

impl<T: ?Sized> Hash for *mut T

source§

fn hash<H: Hasher>(&self, state: &mut H)

Feeds this value into the given Hasher. Read more
1.3.0 · source§

fn hash_slice<H: Hasher>(data: &[Self], state: &mut H)
where Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
1.0.0 · source§

impl<T: ?Sized> Ord for *const T

source§

fn cmp(&self, other: &*const T) -> Ordering

This method returns an Ordering between self and other. Read more
1.21.0 · source§

fn max(self, other: Self) -> Self
where Self: Sized,

Compares and returns the maximum of two values. Read more
1.21.0 · source§

fn min(self, other: Self) -> Self
where Self: Sized,

Compares and returns the minimum of two values. Read more
1.50.0 · source§

fn clamp(self, min: Self, max: Self) -> Self
where Self: Sized + PartialOrd,

Restrict a value to a certain interval. Read more
1.0.0 · source§

impl<T: ?Sized> Ord for *mut T

source§

fn cmp(&self, other: &*mut T) -> Ordering

This method returns an Ordering between self and other. Read more
1.21.0 · source§

fn max(self, other: Self) -> Self
where Self: Sized,

Compares and returns the maximum of two values. Read more
1.21.0 · source§

fn min(self, other: Self) -> Self
where Self: Sized,

Compares and returns the minimum of two values. Read more
1.50.0 · source§

fn clamp(self, min: Self, max: Self) -> Self
where Self: Sized + PartialOrd,

Restrict a value to a certain interval. Read more
1.0.0 · source§

impl<T: ?Sized> PartialEq for *const T

source§

fn eq(&self, other: &*const T) -> bool

This method tests for self and other values to be equal, and is used by ==.
1.0.0 · source§

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
1.0.0 · source§

impl<T: ?Sized> PartialEq for *mut T

source§

fn eq(&self, other: &*mut T) -> bool

This method tests for self and other values to be equal, and is used by ==.
1.0.0 · source§

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
1.0.0 · source§

impl<T: ?Sized> PartialOrd for *const T

source§

fn partial_cmp(&self, other: &*const T) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
source§

fn lt(&self, other: &*const T) -> bool

This method tests less than (for self and other) and is used by the < operator. Read more
source§

fn le(&self, other: &*const T) -> bool

This method tests less than or equal to (for self and other) and is used by the <= operator. Read more
source§

fn gt(&self, other: &*const T) -> bool

This method tests greater than (for self and other) and is used by the > operator. Read more
source§

fn ge(&self, other: &*const T) -> bool

This method tests greater than or equal to (for self and other) and is used by the >= operator. Read more
1.0.0 · source§

impl<T: ?Sized> PartialOrd for *mut T

source§

fn partial_cmp(&self, other: &*mut T) -> Option<Ordering>

This method returns an ordering between self and other values if one exists. Read more
source§

fn lt(&self, other: &*mut T) -> bool

This method tests less than (for self and other) and is used by the < operator. Read more
source§

fn le(&self, other: &*mut T) -> bool

This method tests less than or equal to (for self and other) and is used by the <= operator. Read more
source§

fn gt(&self, other: &*mut T) -> bool

This method tests greater than (for self and other) and is used by the > operator. Read more
source§

fn ge(&self, other: &*mut T) -> bool

This method tests greater than or equal to (for self and other) and is used by the >= operator. Read more
1.0.0 · source§

impl<T: ?Sized> Pointer for *const T

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
1.0.0 · source§

impl<T: ?Sized> Pointer for *mut T

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
source§

impl<T> SimdElement for *const T
where T: Pointee<Metadata = ()>,

§

type Mask = isize

🔬This is a nightly-only experimental API. (portable_simd #86656)
The mask element type corresponding to this element type.
source§

impl<T> SimdElement for *mut T
where T: Pointee<Metadata = ()>,

§

type Mask = isize

🔬This is a nightly-only experimental API. (portable_simd #86656)
The mask element type corresponding to this element type.
source§

impl<'a, T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for &'a T

source§

impl<'a, T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for &'a mut T

source§

impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for *const T

source§

impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*const U> for *mut T

source§

impl<'a, T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*mut U> for &'a mut T

source§

impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<*mut U> for *mut T

1.0.0 · source§

impl<T: ?Sized> Copy for *const T

1.0.0 · source§

impl<T: ?Sized> Copy for *mut T

source§

impl<T: ?Sized + Unsize<U>, U: ?Sized> DispatchFromDyn<*const U> for *const T

source§

impl<T: ?Sized + Unsize<U>, U: ?Sized> DispatchFromDyn<*mut U> for *mut T

1.0.0 · source§

impl<T: ?Sized> Eq for *const T

1.0.0 · source§

impl<T: ?Sized> Eq for *mut T

source§

impl<T: ?Sized> Freeze for *const T

source§

impl<T: ?Sized> Freeze for *mut T

1.0.0 · source§

impl<T: ?Sized> !Send for *const T

1.0.0 · source§

impl<T: ?Sized> !Send for *mut T

1.0.0 · source§

impl<T: ?Sized> !Sync for *const T

1.0.0 · source§

impl<T: ?Sized> !Sync for *mut T

1.38.0 · source§

impl<T: ?Sized> Unpin for *const T

1.38.0 · source§

impl<T: ?Sized> Unpin for *mut T

1.9.0 · source§

impl<T: RefUnwindSafe + ?Sized> UnwindSafe for *const T

1.9.0 · source§

impl<T: RefUnwindSafe + ?Sized> UnwindSafe for *mut T

Auto Trait Implementations§

Blanket Implementations§

source§

impl<T> Any for T
where T: 'static + ?Sized,

source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
source§

impl<T> Borrow<T> for T
where T: ?Sized,

source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
source§

impl<T> CloneToUninit for T
where T: Clone,

source§

default unsafe fn clone_to_uninit(&self, dst: *mut T)

🔬This is a nightly-only experimental API. (clone_to_uninit #126799)
Performs copy-assignment from self to dst. Read more
source§

impl<T> CloneToUninit for T
where T: Copy,

source§

unsafe fn clone_to_uninit(&self, dst: *mut T)

🔬This is a nightly-only experimental API. (clone_to_uninit #126799)
Performs copy-assignment from self to dst. Read more
source§

impl<T> From<T> for T

source§

fn from(t: T) -> T

Returns the argument unchanged.

source§

impl<T, U> Into<U> for T
where U: From<T>,

source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

§

type Error = Infallible

The type returned in the event of a conversion error.
source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.