pub struct Atomic<T>
where
    T: AtomicPrimitive,
{ /* private fields */ }
🔬This is a nightly-only experimental API. (generic_atomic #130539)
A memory location which can be safely modified from multiple threads.
This has the same size and bit validity as the underlying type T. However,
the alignment of this type is always equal to its size, even on targets where
T has alignment less than its size.
For more about the differences between atomic types and non-atomic types as well as information about the portability of this type, please see the module-level documentation.
Note: This type is only available on platforms that support atomic loads
and stores of T.
Implementations
impl Atomic<bool>
pub const fn new(v: bool) -> Atomic<bool>
1.0.0 (const: 1.24.0) · Available on target_has_atomic_load_store=8 only.
Creates a new AtomicBool.
§Examples
pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Atomic<bool>
1.75.0 (const: 1.84.0) · Available on target_has_atomic_load_store=8 only.
Creates a new AtomicBool from a pointer.
§Examples
use std::sync::atomic::{self, AtomicBool};
// Get a pointer to an allocated value
let ptr: *mut bool = Box::into_raw(Box::new(false));
assert!(ptr.cast::<AtomicBool>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicBool::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(true, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, true);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicBool>() (note that this is always true, since align_of::<AtomicBool>() == 1).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
pub fn get_mut(&mut self) -> &mut bool
1.15.0 · Available on target_has_atomic_load_store=8 only.
Returns a mutable reference to the underlying bool.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
pub fn from_mut(v: &mut bool) -> &mut Atomic<bool>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=8 and target_has_atomic_equal_alignment=8 only.
Gets atomic access to a &mut bool.
§Examples
pub fn get_mut_slice(this: &mut [Atomic<bool>]) -> &mut [bool]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=8 only.
Gets non-atomic access to a &mut [AtomicBool] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicBool, Ordering};
let mut some_bools = [const { AtomicBool::new(false) }; 10];
let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
assert_eq!(view, [false; 10]);
view[..5].copy_from_slice(&[true; 5]);
std::thread::scope(|s| {
for t in &some_bools[..5] {
s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
}
for f in &some_bools[5..] {
s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
}
});
pub fn from_mut_slice(v: &mut [bool]) -> &mut [Atomic<bool>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=8 and target_has_atomic_equal_alignment=8 only.
Gets atomic access to a &mut [bool] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicBool, Ordering};
let mut some_bools = [false; 10];
let a = &*AtomicBool::from_mut_slice(&mut some_bools);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(true, Ordering::Relaxed));
}
});
assert_eq!(some_bools, [true; 10]);
pub const fn into_inner(self) -> bool
1.15.0 (const: 1.79.0) · Available on target_has_atomic_load_store=8 only.
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
pub fn load(&self, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 only.
Loads a value from the bool.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
pub fn store(&self, val: bool, order: Ordering)
1.0.0 · Available on target_has_atomic_load_store=8 only.
Stores a value into the bool.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
pub fn swap(&self, val: bool, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Stores a value into the bool, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
pub fn compare_and_swap(
&self,
current: bool,
new: bool,
order: Ordering,
) -> bool
1.0.0 · 👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Stores a value into the bool if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the value
was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let some_bool = AtomicBool::new(true);
assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
assert_eq!(some_bool.load(Ordering::Relaxed), false);
assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
assert_eq!(some_bool.load(Ordering::Relaxed), false);
pub fn compare_exchange(
&self,
current: bool,
new: bool,
success: Ordering,
failure: Ordering,
) -> Result<bool, bool>
1.10.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Stores a value into the bool if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and containing
the previous value. On success this value is guaranteed to be equal to current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let some_bool = AtomicBool::new(true);
assert_eq!(some_bool.compare_exchange(true,
false,
Ordering::Acquire,
Ordering::Relaxed),
Ok(true));
assert_eq!(some_bool.load(Ordering::Relaxed), false);
assert_eq!(some_bool.compare_exchange(true, true,
Ordering::SeqCst,
Ordering::Acquire),
Err(false));
assert_eq!(some_bool.load(Ordering::Relaxed), false);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. In this case, compare_exchange can lead to the
ABA problem.
pub fn compare_exchange_weak(
&self,
current: bool,
new: bool,
success: Ordering,
failure: Ordering,
) -> Result<bool, bool>
1.10.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Stores a value into the bool if the current value is the same as the current argument.
Unlike AtomicBool::compare_exchange, this function is allowed to spuriously fail even when the
comparison succeeds, which can result in more efficient code on some platforms. The
return value is a result indicating whether the new value was written and containing the
previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let val = AtomicBool::new(false);
let new = true;
let mut old = val.load(Ordering::Relaxed);
loop {
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange_weak is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange_weak with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange_weak is being used to check the identity of a value, but equality
does not necessarily imply identity. In this case, compare_exchange_weak can lead to the
ABA problem.
pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Logical “and” with a boolean value.
Performs a logical “and” operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Logical “nand” with a boolean value.
Performs a logical “nand” operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Logical “or” with a boolean value.
Performs a logical “or” operation on the current value and the argument val, and sets the
new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_or(true, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1.0.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Logical “xor” with a boolean value.
Performs a logical “xor” operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
pub fn fetch_not(&self, order: Ordering) -> bool
1.81.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Logical “not” with a boolean value.
Performs a logical “not” operation on the current value, and sets the new value to the result.
Returns the previous value.
fetch_not takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
pub const fn as_ptr(&self) -> *mut bool
1.70.0 (const: 1.70.0) · Available on target_has_atomic_load_store=8 only.
Returns a mutable pointer to the underlying bool.
Doing non-atomic reads and writes on the resulting boolean can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut bool instead of &AtomicBool.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicBool;
extern "C" {
fn my_atomic_op(arg: *mut bool);
}
let mut atomic = AtomicBool::new(true);
unsafe {
my_atomic_op(atomic.as_ptr());
}
pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<bool, bool>
1.53.0 · 👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
An alias for AtomicBool::try_update.
pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(bool) -> Option<bool>,
) -> Result<bool, bool>
1.96.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function
returned Some(_), else Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been
changed from other threads in the meantime, as long as the function
returns Some(_), but the function will have been applied only once to
the stored value.
try_update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicBool::compare_exchange respectively.
Using Acquire as success ordering makes the store part of this
operation Relaxed, and using Release makes the final successful
load Relaxed. The (failed) load ordering can only be SeqCst,
Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let x = AtomicBool::new(false);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
assert_eq!(x.load(Ordering::SeqCst), false);
pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(bool) -> bool,
) -> bool
1.96.0 · Available on target_has_atomic_load_store=8 and target_has_atomic=8 only.
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicBool::compare_exchange respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem.
§Examples
impl<T> Atomic<*mut T>
pub const fn new(p: *mut T) -> Atomic<*mut T>
1.0.0 (const: 1.24.0) · Available on target_has_atomic_load_store=ptr only.
Creates a new AtomicPtr.
§Examples
pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Atomic<*mut T>
1.75.0 (const: 1.84.0) · Available on target_has_atomic_load_store=ptr only.
Creates a new AtomicPtr from a pointer.
§Examples
use std::sync::atomic::{self, AtomicPtr};
// Get a pointer to an allocated value
let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert!(!unsafe { *ptr }.is_null());
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicPtr<T>>() (note that on some platforms this can be bigger than align_of::<*mut T>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
pub const fn null() -> Atomic<*mut T>
🔬This is a nightly-only experimental API. (atomic_ptr_null #150733) · Available on target_has_atomic_load_store=ptr only.
Creates a new AtomicPtr initialized with a null pointer.
§Examples
pub fn get_mut(&mut self) -> &mut *mut T
1.15.0 · Available on target_has_atomic_load_store=ptr only.
Returns a mutable reference to the underlying pointer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
pub fn from_mut(v: &mut *mut T) -> &mut Atomic<*mut T>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=ptr and target_has_atomic_equal_alignment=ptr only.
Gets atomic access to a pointer.
Note: This function is only available on targets where AtomicPtr<T> has the same alignment as *const T.
§Examples
pub fn get_mut_slice(this: &mut [Atomic<*mut T>]) -> &mut [*mut T]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=ptr only.
Gets non-atomic access to a &mut [AtomicPtr] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::ptr::null_mut;
use std::sync::atomic::{AtomicPtr, Ordering};
let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
assert_eq!(view, [null_mut::<String>(); 10]);
view
.iter_mut()
.enumerate()
.for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
std::thread::scope(|s| {
for ptr in &some_ptrs {
s.spawn(move || {
let ptr = ptr.load(Ordering::Relaxed);
assert!(!ptr.is_null());
let name = unsafe { Box::from_raw(ptr) };
println!("Hello, {name}!");
});
}
});
pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Atomic<*mut T>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) · Available on target_has_atomic_load_store=ptr and target_has_atomic_equal_alignment=ptr only.
Gets atomic access to a slice of pointers.
Note: This function is only available on targets where AtomicPtr<T> has the same alignment as *const T.
§Examples
#![feature(atomic_from_mut)]
use std::ptr::null_mut;
use std::sync::atomic::{AtomicPtr, Ordering};
let mut some_ptrs = [null_mut::<String>(); 10];
let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || {
let name = Box::new(format!("thread{i}"));
a[i].store(Box::into_raw(name), Ordering::Relaxed);
});
}
});
for p in some_ptrs {
assert!(!p.is_null());
let name = unsafe { Box::from_raw(p) };
println!("Hello, {name}!");
}
pub const fn into_inner(self) -> *mut T
1.15.0 (const: 1.79.0) · Available on target_has_atomic_load_store=ptr only.
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
pub fn load(&self, order: Ordering) -> *mut T
1.0.0 · Available on target_has_atomic_load_store=ptr only.
Loads a value from the pointer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
pub fn store(&self, ptr: *mut T, order: Ordering)
1.0.0 · Available on target_has_atomic_load_store=ptr only.
Stores a value into the pointer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
1.0.0 · Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Stores a value into the pointer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
pub fn compare_and_swap(
&self,
current: *mut T,
new: *mut T,
order: Ordering,
) -> *mut T
1.0.0 · 👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Stores a value into the pointer if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the value
was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
1.10.0 · Source
pub fn compare_exchange(
    &self,
    current: *mut T,
    new: *mut T,
    success: Ordering,
    failure: Ordering,
) -> Result<*mut T, *mut T>
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Stores a value into the pointer if the current value is the same as current.
The return value is a result indicating whether the new value was written and containing
the previous value. On success this value is guaranteed to be equal to current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let other_ptr = &mut 10;
let value = some_ptr.compare_exchange(ptr, other_ptr,
Ordering::SeqCst, Ordering::Relaxed);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · Source
pub fn compare_exchange_weak(
    &self,
    current: *mut T,
    new: *mut T,
    success: Ordering,
    failure: Ordering,
) -> Result<*mut T, *mut T>
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Stores a value into the pointer if the current value is the same as current.
Unlike AtomicPtr::compare_exchange, this function is allowed to spuriously fail even when the
comparison succeeds, which can result in more efficient code on some platforms. The
return value is a result indicating whether the new value was written and containing the
previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let some_ptr = AtomicPtr::new(&mut 5);
let new = &mut 10;
let mut old = some_ptr.load(Ordering::Relaxed);
loop {
match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.53.0 · Source
pub fn fetch_update<F>(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: F,
) -> Result<*mut T, *mut T>
👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
An alias for AtomicPtr::try_update.
1.96.0 · Source
pub fn try_update(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: impl FnMut(*mut T) -> Option<*mut T>,
) -> Result<*mut T, *mut T>
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function
returned Some(_), else Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been
changed from other threads in the meantime, as long as the function
returns Some(_), but the function will have been applied only once to
the stored value.
try_update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicPtr::compare_exchange respectively.
Using Acquire as success ordering makes the store part of this
operation Relaxed, and using Release makes the final successful
load Relaxed. The (failed) load ordering can only be SeqCst,
Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem, which is a particularly common pitfall for pointers!
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let ptr: *mut _ = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let new: *mut _ = &mut 10;
assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
if x == ptr {
Some(new)
} else {
None
}
});
assert_eq!(result, Ok(ptr));
assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1.96.0 · Source
pub fn update(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: impl FnMut(*mut T) -> *mut T,
) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicPtr::compare_exchange respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem, which is a particularly common pitfall for pointers!
§Examples
1.91.0 · Source
pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Offsets the pointer’s address by adding val (in units of T),
returning the previous pointer.
This is equivalent to using wrapping_add to atomically perform the
equivalent of ptr = ptr.wrapping_add(val);.
This method operates in units of T, which means that it cannot be used
to offset the pointer by an amount which is not a multiple of
size_of::<T>(). This can sometimes be inconvenient, as you may want to
work with a deliberately misaligned pointer. In such cases, you may use
the fetch_byte_add method instead.
fetch_ptr_add takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Source
pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Offsets the pointer’s address by subtracting val (in units of T),
returning the previous pointer.
This is equivalent to using wrapping_sub to atomically perform the
equivalent of ptr = ptr.wrapping_sub(val);.
This method operates in units of T, which means that it cannot be used
to offset the pointer by an amount which is not a multiple of
size_of::<T>(). This can sometimes be inconvenient, as you may want to
work with a deliberately misaligned pointer. In such cases, you may use
the fetch_byte_sub method instead.
fetch_ptr_sub takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Source
pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Offsets the pointer’s address by adding val bytes, returning the
previous pointer.
This is equivalent to using wrapping_byte_add to atomically
perform ptr = ptr.wrapping_byte_add(val).
fetch_byte_add takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Source
pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Offsets the pointer’s address by subtracting val bytes, returning the
previous pointer.
This is equivalent to using wrapping_byte_sub to atomically
perform ptr = ptr.wrapping_byte_sub(val).
fetch_byte_sub takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Source
pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Performs a bitwise “or” operation on the address of the current pointer,
and the argument val, and stores a pointer with provenance of the
current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a | val). This can be used in tagged
pointer schemes to atomically set tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_or(val).map_addr(|a| a | val).
fetch_or takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
use core::sync::atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Tag the bottom bit of the pointer.
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
// Extract and untag.
let tagged = atom.load(Ordering::Relaxed);
assert_eq!(tagged.addr() & 1, 1);
assert_eq!(tagged.map_addr(|p| p & !1), pointer);
1.91.0 · Source
pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Performs a bitwise “and” operation on the address of the current
pointer, and the argument val, and stores a pointer with provenance of
the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a & val). This can be used in tagged
pointer schemes to atomically unset tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_and(val).map_addr(|a| a & val).
fetch_and takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
use core::sync::atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
// A tagged pointer
let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
// Untag, and extract the previously tagged pointer.
let untagged = atom.fetch_and(!1, Ordering::Relaxed)
.map_addr(|a| a & !1);
assert_eq!(untagged, pointer);
1.91.0 · Source
pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
Available on target_has_atomic_load_store=ptr and target_has_atomic=ptr only.
Performs a bitwise “xor” operation on the address of the current
pointer, and the argument val, and stores a pointer with provenance of
the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a ^ val). This can be used in tagged
pointer schemes to atomically toggle tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_xor(val).map_addr(|a| a ^ val).
fetch_xor takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
1.70.0 (const: 1.70.0) · Source
pub const fn as_ptr(&self) -> *mut *mut T
Available on target_has_atomic_load_store=ptr only.
Returns a mutable pointer to the underlying pointer.
Doing non-atomic reads and writes on the resulting pointer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut *mut T instead of &AtomicPtr<T>.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicPtr;
extern "C" {
fn my_atomic_op(arg: *mut *mut u32);
}
let mut value = 17;
let atomic = AtomicPtr::new(&mut value);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<i8>
1.34.0 (const: 1.34.0) · Source
pub const fn new(v: i8) -> Atomic<i8>
Creates a new atomic integer.
§Examples
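The doctest was not captured here; a minimal sketch of typical usage:

```rust
use std::sync::atomic::AtomicI8;

fn main() {
    let atomic_forty_two = AtomicI8::new(42);
    assert_eq!(atomic_forty_two.into_inner(), 42);
}
```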
1.75.0 (const: 1.84.0) · Source
pub const unsafe fn from_ptr<'a>(ptr: *mut i8) -> &'a Atomic<i8>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI8};
// Get a pointer to an allocated value
let ptr: *mut i8 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI8>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI8::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI8>() (note that this is always true, since align_of::<AtomicI8>() == 1).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Source
pub fn get_mut(&mut self) -> &mut i8
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
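The doctest was not captured here; a minimal sketch of typical usage:

```rust
use std::sync::atomic::{AtomicI8, Ordering};

fn main() {
    let mut some_int = AtomicI8::new(10);
    // With exclusive access, the value can be read and written non-atomically.
    assert_eq!(*some_int.get_mut(), 10);
    *some_int.get_mut() = 5;
    assert_eq!(some_int.load(Ordering::SeqCst), 5);
}
```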
Source
pub fn from_mut(v: &mut i8) -> &mut Atomic<i8>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=8 only.
Get atomic access to a &mut i8.
§Examples
Source
pub fn get_mut_slice(this: &mut [Atomic<i8>]) -> &mut [i8]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicI8] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI8, Ordering};
let mut some_ints = [const { AtomicI8::new(0) }; 10];
let view: &mut [i8] = AtomicI8::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Source
pub fn from_mut_slice(v: &mut [i8]) -> &mut [Atomic<i8>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=8 only.
Get atomic access to a &mut [i8] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI8, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI8::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Source
pub const fn into_inner(self) -> i8
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
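The doctest was not captured here; a minimal sketch of typical usage:

```rust
use std::sync::atomic::AtomicI8;

fn main() {
    let some_int = AtomicI8::new(5);
    assert_eq!(some_int.into_inner(), 5);
}
```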
1.34.0 · Source
pub fn load(&self, order: Ordering) -> i8
Loads a value from the atomic integer.
1.34.0 · Source
pub fn store(&self, val: i8, order: Ordering)
Stores a value into the atomic integer.
1.34.0 · Source
pub fn swap(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
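The doctest was not captured here; a minimal sketch of typical usage:

```rust
use std::sync::atomic::{AtomicI8, Ordering};

fn main() {
    let some_var = AtomicI8::new(5);
    // The previous value is returned; the new value is now stored.
    assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_var.load(Ordering::Relaxed), 10);
}
```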
1.34.0 · Source
pub fn compare_and_swap(&self, current: i8, new: i8, order: Ordering) -> i8
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as current.
The return value is always the previous value. If it is equal to current, then the
value was updated.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let some_var = AtomicI8::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Source
pub fn compare_exchange(
    &self,
    current: i8,
    new: i8,
    success: Ordering,
    failure: Ordering,
) -> Result<i8, i8>
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as current.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let some_var = AtomicI8::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn compare_exchange_weak(
    &self,
    current: i8,
    new: i8,
    success: Ordering,
    failure: Ordering,
) -> Result<i8, i8>
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as current.
Unlike AtomicI8::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let val = AtomicI8::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn fetch_add(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
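The doctest was not captured here; a minimal sketch of typical usage:

```rust
use std::sync::atomic::{AtomicI8, Ordering};

fn main() {
    let foo = AtomicI8::new(0);
    // Returns the previous value; the addition wraps on overflow.
    assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```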
1.34.0 · Source
pub fn fetch_sub(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · pub fn fetch_and(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · pub fn fetch_nand(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · pub fn fetch_or(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · pub fn fetch_xor(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.45.0 · pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i8, i8>
👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic=8 only.
An alias for AtomicI8::try_update.
1.96.0 · pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i8) -> Option<i8>,
) -> Result<i8, i8>
Available on target_has_atomic=8 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let x = AtomicI8::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i8) -> i8,
) -> i8
Available on target_has_atomic=8 only.
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · pub fn fetch_max(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let foo = AtomicI8::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · pub fn fetch_min(&self, val: i8, order: Ordering) -> i8
Available on target_has_atomic=8 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let foo = AtomicI8::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · pub const fn as_ptr(&self) -> *mut i8
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i8 instead of &AtomicI8.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI8;
extern "C" {
fn my_atomic_op(arg: *mut i8);
}
let atomic = AtomicI8::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
impl Atomic<u8>
1.34.0 (const: 1.34.0) · pub const fn new(v: u8) -> Atomic<u8>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · pub const unsafe fn from_ptr<'a>(ptr: *mut u8) -> &'a Atomic<u8>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU8};
// Get a pointer to an allocated value
let ptr: *mut u8 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU8>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU8::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
ptrmust be aligned toalign_of::<AtomicU8>()(note that this is always true, sincealign_of::<AtomicU8>() == 1).ptrmust be valid for both reads and writes for the whole lifetime'a.- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · pub fn get_mut(&mut self) -> &mut u8
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
pub fn from_mut(v: &mut u8) -> &mut Atomic<u8>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=8 only.
Get atomic access to a &mut u8.
§Examples
pub fn get_mut_slice(this: &mut [Atomic<u8>]) -> &mut [u8]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicU8] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU8, Ordering};
let mut some_ints = [const { AtomicU8::new(0) }; 10];
let view: &mut [u8] = AtomicU8::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
pub fn from_mut_slice(v: &mut [u8]) -> &mut [Atomic<u8>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=8 only.
Get atomic access to a &mut [u8] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU8, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU8::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · pub const fn into_inner(self) -> u8
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · pub fn load(&self, order: Ordering) -> u8
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
1.34.0 · pub fn store(&self, val: u8, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
1.34.0 · pub fn swap(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn compare_and_swap(&self, current: u8, new: u8, order: Ordering) -> u8
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let some_var = AtomicU8::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · pub fn compare_exchange(
&self,
current: u8,
new: u8,
success: Ordering,
failure: Ordering,
) -> Result<u8, u8>
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let some_var = AtomicU8::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · pub fn compare_exchange_weak(
&self,
current: u8,
new: u8,
success: Ordering,
failure: Ordering,
) -> Result<u8, u8>
Available on target_has_atomic=8 only.
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicU8::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let val = AtomicU8::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · pub fn fetch_add(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn fetch_sub(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn fetch_and(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn fetch_nand(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn fetch_or(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.34.0 · pub fn fetch_xor(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
1.45.0 · pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u8, u8>
👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic=8 only.
An alias for AtomicU8::try_update.
1.96.0 · pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u8) -> Option<u8>,
) -> Result<u8, u8>
Available on target_has_atomic=8 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let x = AtomicU8::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u8) -> u8,
) -> u8
Available on target_has_atomic=8 only.
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · pub fn fetch_max(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let foo = AtomicU8::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · pub fn fetch_min(&self, val: u8, order: Ordering) -> u8
Available on target_has_atomic=8 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let foo = AtomicU8::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
let foo = AtomicU8::new(23);
let bar = 12;
let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
assert_eq!(min_foo, 12);
1.70.0 (const: 1.70.0) · Source
pub const fn as_ptr(&self) -> *mut u8
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u8 instead of &AtomicU8.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU8;
extern "C" {
fn my_atomic_op(arg: *mut u8);
}
let atomic = AtomicU8::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§
impl Atomic<i16>
1.34.0 (const: 1.34.0) · Source
pub const fn new(v: i16) -> Atomic<i16>
Creates a new atomic integer.
§Examples
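The example code was lost in extraction; a minimal sketch, mirroring the pattern of the other examples in these docs:

```rust
use std::sync::atomic::AtomicI16;

fn main() {
    // `new` is a const fn, so it can also initialize a `static`.
    let atomic_forty_two = AtomicI16::new(42);
    assert_eq!(atomic_forty_two.into_inner(), 42);
}
```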
1.75.0 (const: 1.84.0) · Source
pub const unsafe fn from_ptr<'a>(ptr: *mut i16) -> &'a Atomic<i16>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI16};
// Get a pointer to an allocated value
let ptr: *mut i16 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI16>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI16::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI16>() (note that on some platforms this can be bigger than align_of::<i16>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Source
pub fn get_mut(&mut self) -> &mut i16
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
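The example code was lost in extraction; a minimal sketch of typical get_mut usage:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let mut some_int = AtomicI16::new(10);
    // With exclusive (&mut) access, no atomic operations are needed.
    assert_eq!(*some_int.get_mut(), 10);
    *some_int.get_mut() = 5;
    assert_eq!(some_int.load(Ordering::SeqCst), 5);
}
```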
Source
pub fn from_mut(v: &mut i16) -> &mut Atomic<i16>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=16 only.
Get atomic access to a &mut i16.
Note: This function is only available on targets where AtomicI16 has the same alignment as i16.
§Examples
Source
pub fn get_mut_slice(this: &mut [Atomic<i16>]) -> &mut [i16]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicI16] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI16, Ordering};
let mut some_ints = [const { AtomicI16::new(0) }; 10];
let view: &mut [i16] = AtomicI16::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Source
pub fn from_mut_slice(v: &mut [i16]) -> &mut [Atomic<i16>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=16 only.
Get atomic access to a &mut [i16] slice.
Note: This function is only available on targets where AtomicI16 has the same alignment as i16.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI16, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI16::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Source
pub const fn into_inner(self) -> i16
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
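The example code was lost in extraction; a minimal sketch of into_inner:

```rust
use std::sync::atomic::AtomicI16;

fn main() {
    let some_int = AtomicI16::new(5);
    // Consuming the atomic by value yields the plain integer.
    assert_eq!(some_int.into_inner(), 5);
}
```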
1.34.0 · Source
pub fn load(&self, order: Ordering) -> i16
Loads a value from the atomic integer. load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · Source
pub fn store(&self, val: i16, order: Ordering)
Stores a value into the atomic integer. store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · Source
pub fn swap(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
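The example code was lost in extraction; a minimal sketch of swap:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let some_int = AtomicI16::new(5);
    // `swap` stores 10 and returns the previous value, 5.
    assert_eq!(some_int.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_int.load(Ordering::Relaxed), 10);
}
```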
1.34.0 · Source
pub fn compare_and_swap(&self, current: i16, new: i16, order: Ordering) -> i16
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let some_var = AtomicI16::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Source
pub fn compare_exchange(
&self,
current: i16,
new: i16,
success: Ordering,
failure: Ordering,
) -> Result<i16, i16>
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let some_var = AtomicI16::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn compare_exchange_weak(
&self,
current: i16,
new: i16,
success: Ordering,
failure: Ordering,
) -> Result<i16, i16>
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as the current argument.
Unlike AtomicI16::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let val = AtomicI16::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn fetch_add(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
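The example code was lost in extraction; a minimal sketch of fetch_add:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(0);
    // Returns the value before the addition.
    assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```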
1.34.0 · Source
pub fn fetch_sub(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
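The example code was lost in extraction; a minimal sketch of fetch_sub:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(20);
    // Returns the value before the subtraction.
    assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```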
1.34.0 · Source
pub fn fetch_and(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
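The example code was lost in extraction; a minimal sketch of fetch_and:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(0b101101);
    // Returns the previous value; the stored value becomes 0b101101 & 0b110011.
    assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
}
```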
1.34.0 · Source
pub fn fetch_nand(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
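The example code was lost in extraction; a minimal sketch of fetch_nand:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(0x13);
    // Returns the previous value; the stored value becomes !(0x13 & 0x31).
    assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
    assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
}
```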
1.34.0 · Source
pub fn fetch_or(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
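The example code was lost in extraction; a minimal sketch of fetch_or:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(0b101101);
    // Returns the previous value; the stored value becomes 0b101101 | 0b110011.
    assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
}
```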
1.34.0 · Source
pub fn fetch_xor(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
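The example code was lost in extraction; a minimal sketch of fetch_xor:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let foo = AtomicI16::new(0b101101);
    // Returns the previous value; the stored value becomes 0b101101 ^ 0b110011.
    assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
}
```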
1.45.0 · Source
pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i16, i16>
👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic=16 only.
An alias for AtomicI16::try_update.
1.96.0 · Source
pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i16) -> Option<i16>,
) -> Result<i16, i16>
Available on target_has_atomic=16 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let x = AtomicI16::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Source
pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i16) -> i16,
) -> i16
Available on target_has_atomic=16 only.
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the previous value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Source
pub fn fetch_max(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let foo = AtomicI16::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
let foo = AtomicI16::new(23);
let bar = 42;
let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
assert!(max_foo == 42);
1.45.0 · Source
pub fn fetch_min(&self, val: i16, order: Ordering) -> i16
Available on target_has_atomic=16 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let foo = AtomicI16::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
let foo = AtomicI16::new(23);
let bar = 12;
let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
assert_eq!(min_foo, 12);
1.70.0 (const: 1.70.0) · Source
pub const fn as_ptr(&self) -> *mut i16
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i16 instead of &AtomicI16.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI16;
extern "C" {
fn my_atomic_op(arg: *mut i16);
}
let atomic = AtomicI16::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§
impl Atomic<u16>
1.34.0 (const: 1.34.0) · Source
pub const fn new(v: u16) -> Atomic<u16>
Creates a new atomic integer.
§Examples
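The example code was lost in extraction; a minimal sketch of constructing an AtomicU16:

```rust
use std::sync::atomic::AtomicU16;

fn main() {
    // `new` is a const fn, so it can also initialize a `static`.
    let atomic_forty_two = AtomicU16::new(42);
    assert_eq!(atomic_forty_two.into_inner(), 42);
}
```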
1.75.0 (const: 1.84.0) · Source
pub const unsafe fn from_ptr<'a>(ptr: *mut u16) -> &'a Atomic<u16>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU16};
// Get a pointer to an allocated value
let ptr: *mut u16 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU16>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU16::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicU16>() (note that on some platforms this can be bigger than align_of::<u16>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Source
pub fn get_mut(&mut self) -> &mut u16
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
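The example code was lost in extraction; a minimal sketch of get_mut:

```rust
use std::sync::atomic::{AtomicU16, Ordering};

fn main() {
    let mut some_int = AtomicU16::new(10);
    // With exclusive (&mut) access, no atomic operations are needed.
    assert_eq!(*some_int.get_mut(), 10);
    *some_int.get_mut() = 5;
    assert_eq!(some_int.load(Ordering::SeqCst), 5);
}
```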
Source
pub fn from_mut(v: &mut u16) -> &mut Atomic<u16>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=16 only.
Get atomic access to a &mut u16.
Note: This function is only available on targets where AtomicU16 has the same alignment as u16.
§Examples
Source
pub fn get_mut_slice(this: &mut [Atomic<u16>]) -> &mut [u16]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicU16] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU16, Ordering};
let mut some_ints = [const { AtomicU16::new(0) }; 10];
let view: &mut [u16] = AtomicU16::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Source
pub fn from_mut_slice(v: &mut [u16]) -> &mut [Atomic<u16>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=16 only.
Get atomic access to a &mut [u16] slice.
Note: This function is only available on targets where AtomicU16 has the same alignment as u16.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU16, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU16::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Source
pub const fn into_inner(self) -> u16
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
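The example code was lost in extraction; a minimal sketch of into_inner:

```rust
use std::sync::atomic::AtomicU16;

fn main() {
    let some_int = AtomicU16::new(5);
    // Consuming the atomic by value yields the plain integer.
    assert_eq!(some_int.into_inner(), 5);
}
```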
1.34.0 · Source
pub fn load(&self, order: Ordering) -> u16
Loads a value from the atomic integer. load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · Source
pub fn store(&self, val: u16, order: Ordering)
Stores a value into the atomic integer. store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · Source
pub fn swap(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
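The example code was lost in extraction; a minimal sketch of swap:

```rust
use std::sync::atomic::{AtomicU16, Ordering};

fn main() {
    let some_int = AtomicU16::new(5);
    // `swap` stores 10 and returns the previous value, 5.
    assert_eq!(some_int.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_int.load(Ordering::Relaxed), 10);
}
```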
1.34.0 · Source
pub fn compare_and_swap(&self, current: u16, new: u16, order: Ordering) -> u16
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let some_var = AtomicU16::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Source
pub fn compare_exchange(
&self,
current: u16,
new: u16,
success: Ordering,
failure: Ordering,
) -> Result<u16, u16>
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let some_var = AtomicU16::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: u16,
new: u16,
success: Ordering,
failure: Ordering,
) -> Result<u16, u16>
Available on target_has_atomic=16 only.
Stores a value into the atomic integer if the current value is the same as
the `current` value.
Unlike AtomicU16::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let val = AtomicU16::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u16, u16>
👎Deprecating in 1.99.0: renamed to try_update for consistency. Available on target_has_atomic=16 only.
An alias for AtomicU16::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u16) -> Option<u16>,
) -> Result<u16, u16>
Available on target_has_atomic=16 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let x = AtomicU16::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u16) -> u16,
) -> u16
Available on target_has_atomic=16 only.
Fetches the value, and applies a function to it that returns the new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let foo = AtomicU16::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: u16, order: Ordering) -> u16
Available on target_has_atomic=16 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let foo = AtomicU16::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut u16
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u16 instead of &AtomicU16.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU16;
extern "C" {
fn my_atomic_op(arg: *mut u16);
}
let atomic = AtomicU16::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<i32>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: i32) -> Atomic<i32>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut i32) -> &'a Atomic<i32>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI32};
// Get a pointer to an allocated value
let ptr: *mut i32 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI32>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI32::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- `ptr` must be aligned to `align_of::<AtomicI32>()` (note that on some platforms this can be bigger than `align_of::<i32>()`).
- `ptr` must be valid for both reads and writes for the whole lifetime `'a`.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut i32
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut i32) -> &mut Atomic<i32>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) Available on target_has_atomic_equal_alignment=32 only.
Get atomic access to a &mut i32.
Note: This function is only available on targets where AtomicI32 has the same alignment as i32.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Atomic<i32>]) -> &mut [i32]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicI32] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI32, Ordering};
let mut some_ints = [const { AtomicI32::new(0) }; 10];
let view: &mut [i32] = AtomicI32::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [i32]) -> &mut [Atomic<i32>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314) Available on target_has_atomic_equal_alignment=32 only.
Get atomic access to a &mut [i32] slice.
Note: This function is only available on targets where AtomicI32 has the same alignment as i32.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI32, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI32::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> i32
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> i32
Loads a value from the atomic integer. load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · Sourcepub fn store(&self, val: i32, order: Ordering)
Stores a value into the atomic integer. store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · Sourcepub fn swap(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: i32, new: i32, order: Ordering) -> i32
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead. Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as the `current` value.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let some_var = AtomicI32::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: i32,
new: i32,
success: Ordering,
failure: Ordering,
) -> Result<i32, i32>
Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as
the `current` value.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let some_var = AtomicI32::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: i32,
new: i32,
success: Ordering,
failure: Ordering,
) -> Result<i32, i32>
Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as
the `current` value.
Unlike AtomicI32::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let val = AtomicI32::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i32, i32>
👎Deprecating in 1.99.0: renamed to try_update for consistencyAvailable on target_has_atomic=32 only.
An alias for AtomicI32::try_update.
1.96.0 · pub fn try_update(&self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i32) -> Option<i32>) -> Result<i32, i32>
Available on target_has_atomic=32 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let x = AtomicI32::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · pub fn update(&self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i32) -> i32) -> i32
Available on target_has_atomic=32 only.
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · pub fn fetch_max(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let foo = AtomicI32::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · pub fn fetch_min(&self, val: i32, order: Ordering) -> i32
Available on target_has_atomic=32 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let foo = AtomicI32::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · pub const fn as_ptr(&self) -> *mut i32
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i32 instead of &AtomicI32.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI32;
extern "C" {
fn my_atomic_op(arg: *mut i32);
}
let atomic = AtomicI32::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
§impl Atomic<u32>
1.34.0 (const: 1.34.0) · pub const fn new(v: u32) -> Atomic<u32>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · pub const unsafe fn from_ptr<'a>(ptr: *mut u32) -> &'a Atomic<u32>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU32};
// Get a pointer to an allocated value
let ptr: *mut u32 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU32>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU32::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- `ptr` must be aligned to align_of::<AtomicU32>() (note that on some platforms this can be bigger than align_of::<u32>()).
- `ptr` must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · pub fn get_mut(&mut self) -> &mut u32
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
pub fn from_mut(v: &mut u32) -> &mut Atomic<u32>
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=32 only.
Get atomic access to a &mut u32.
Note: This function is only available on targets where AtomicU32 has the same alignment as u32.
§Examples
pub fn get_mut_slice(this: &mut [Atomic<u32>]) -> &mut [u32]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicU32] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU32, Ordering};
let mut some_ints = [const { AtomicU32::new(0) }; 10];
let view: &mut [u32] = AtomicU32::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
pub fn from_mut_slice(v: &mut [u32]) -> &mut [Atomic<u32>]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=32 only.
Get atomic access to a &mut [u32] slice.
Note: This function is only available on targets where AtomicU32 has the same alignment as u32.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU32, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU32::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · pub const fn into_inner(self) -> u32
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · pub fn load(&self, order: Ordering) -> u32
Loads a value from the atomic integer. load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · pub fn store(&self, val: u32, order: Ordering)
Stores a value into the atomic integer. store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · pub fn swap(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn compare_and_swap(&self, current: u32, new: u32, order: Ordering) -> u32
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead.
Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as the `current` value.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let some_var = AtomicU32::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · pub fn compare_exchange(&self, current: u32, new: u32, success: Ordering, failure: Ordering) -> Result<u32, u32>
Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as the `current` value.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let some_var = AtomicU32::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · pub fn compare_exchange_weak(&self, current: u32, new: u32, success: Ordering, failure: Ordering) -> Result<u32, u32>
Available on target_has_atomic=32 only.
Stores a value into the atomic integer if the current value is the same as the `current` value.
Unlike AtomicU32::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let val = AtomicU32::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · pub fn fetch_add(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn fetch_sub(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn fetch_and(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn fetch_nand(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn fetch_or(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · pub fn fetch_xor(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.45.0 · pub fn fetch_update<F>(&self, set_order: Ordering, fetch_order: Ordering, f: F) -> Result<u32, u32>
👎 Deprecating in 1.99.0: renamed to try_update for consistency.
Available on target_has_atomic=32 only.
An alias for AtomicU32::try_update.
1.96.0 · pub fn try_update(&self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u32) -> Option<u32>) -> Result<u32, u32>
Available on target_has_atomic=32 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let x = AtomicU32::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · pub fn update(&self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u32) -> u32) -> u32
Available on target_has_atomic=32 only.
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · pub fn fetch_max(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let foo = AtomicU32::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · pub fn fetch_min(&self, val: u32, order: Ordering) -> u32
Available on target_has_atomic=32 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let foo = AtomicU32::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · pub const fn as_ptr(&self) -> *mut u32
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u32 instead of &AtomicU32.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU32;
extern "C" {
fn my_atomic_op(arg: *mut u32);
}
let atomic = AtomicU32::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
§impl Atomic<i64>
1.34.0 (const: 1.34.0) · pub const fn new(v: i64) -> Atomic<i64>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · pub const unsafe fn from_ptr<'a>(ptr: *mut i64) -> &'a Atomic<i64>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI64};
// Get a pointer to an allocated value
let ptr: *mut i64 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI64>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI64::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- `ptr` must be aligned to align_of::<AtomicI64>() (note that on some platforms this can be bigger than align_of::<i64>()).
- `ptr` must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · pub fn get_mut(&mut self) -> &mut i64
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
pub fn from_mut(v: &mut i64) -> &mut Atomic<i64>
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=64 only.
Get atomic access to a &mut i64.
Note: This function is only available on targets where AtomicI64 has the same alignment as i64.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Atomic<i64>]) -> &mut [i64]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Atomic<i64>]) -> &mut [i64]
Get non-atomic access to a &mut [AtomicI64] slice
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI64, Ordering};
let mut some_ints = [const { AtomicI64::new(0) }; 10];
let view: &mut [i64] = AtomicI64::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [i64]) -> &mut [Atomic<i64>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=64 only.
pub fn from_mut_slice(v: &mut [i64]) -> &mut [Atomic<i64>]
Get atomic access to a &mut [i64] slice.
Note: This function is only available on targets where AtomicI64 has the same alignment as i64.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI64, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI64::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> i64
pub const fn into_inner(self) -> i64
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
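The example for into_inner appears to be missing; a minimal sketch:

```rust
use std::sync::atomic::AtomicI64;

fn main() {
    let some_var = AtomicI64::new(5);
    // Consuming the atomic by value yields the plain integer.
    assert_eq!(some_var.into_inner(), 5);
}
```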
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> i64
pub fn load(&self, order: Ordering) -> i64
1.34.0 · Sourcepub fn store(&self, val: i64, order: Ordering)
pub fn store(&self, val: i64, order: Ordering)
1.34.0 · Sourcepub fn swap(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn swap(&self, val: i64, order: Ordering) -> i64
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: i64, new: i64, order: Ordering) -> i64
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=64 only.
pub fn compare_and_swap(&self, current: i64, new: i64, order: Ordering) -> i64
Stores a value into the atomic integer if the current value is the same as
the current value.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let some_var = AtomicI64::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: i64,
new: i64,
success: Ordering,
failure: Ordering,
) -> Result<i64, i64>
Available on target_has_atomic=64 only.
pub fn compare_exchange( &self, current: i64, new: i64, success: Ordering, failure: Ordering, ) -> Result<i64, i64>
Stores a value into the atomic integer if the current value is the same as
the current value.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let some_var = AtomicI64::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: i64,
new: i64,
success: Ordering,
failure: Ordering,
) -> Result<i64, i64>
Available on target_has_atomic=64 only.
pub fn compare_exchange_weak( &self, current: i64, new: i64, success: Ordering, failure: Ordering, ) -> Result<i64, i64>
Stores a value into the atomic integer if the current value is the same as
the current value.
Unlike AtomicI64::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let val = AtomicI64::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_add(&self, val: i64, order: Ordering) -> i64
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_sub(&self, val: i64, order: Ordering) -> i64
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_and(&self, val: i64, order: Ordering) -> i64
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_nand(&self, val: i64, order: Ordering) -> i64
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_or(&self, val: i64, order: Ordering) -> i64
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_xor(&self, val: i64, order: Ordering) -> i64
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i64, i64>
👎Deprecating in 1.99.0: renamed to try_update for consistency
Available on target_has_atomic=64 only.
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<i64, i64>
An alias for AtomicI64::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i64) -> Option<i64>,
) -> Result<i64, i64>
Available on target_has_atomic=64 only.
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i64) -> Option<i64>, ) -> Result<i64, i64>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let x = AtomicI64::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i64) -> i64,
) -> i64
Available on target_has_atomic=64 only.
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i64) -> i64, ) -> i64
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_max(&self, val: i64, order: Ordering) -> i64
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let foo = AtomicI64::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: i64, order: Ordering) -> i64
Available on target_has_atomic=64 only.
pub fn fetch_min(&self, val: i64, order: Ordering) -> i64
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let foo = AtomicI64::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut i64
pub const fn as_ptr(&self) -> *mut i64
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i64 instead of &AtomicI64.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI64;
extern "C" {
fn my_atomic_op(arg: *mut i64);
}
let atomic = AtomicI64::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<u64>
impl Atomic<u64>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: u64) -> Atomic<u64>
pub const fn new(v: u64) -> Atomic<u64>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut u64) -> &'a Atomic<u64>
pub const unsafe fn from_ptr<'a>(ptr: *mut u64) -> &'a Atomic<u64>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU64};
// Get a pointer to an allocated value
let ptr: *mut u64 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU64>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU64::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicU64>() (note that on some platforms this can be bigger than align_of::<u64>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut u64
pub fn get_mut(&mut self) -> &mut u64
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut u64) -> &mut Atomic<u64>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=64 only.
pub fn from_mut(v: &mut u64) -> &mut Atomic<u64>
Get atomic access to a &mut u64.
Note: This function is only available on targets where AtomicU64 has the same alignment as u64.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Atomic<u64>]) -> &mut [u64]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Atomic<u64>]) -> &mut [u64]
Get non-atomic access to a &mut [AtomicU64] slice
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU64, Ordering};
let mut some_ints = [const { AtomicU64::new(0) }; 10];
let view: &mut [u64] = AtomicU64::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [u64]) -> &mut [Atomic<u64>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Available on target_has_atomic_equal_alignment=64 only.
pub fn from_mut_slice(v: &mut [u64]) -> &mut [Atomic<u64>]
Get atomic access to a &mut [u64] slice.
Note: This function is only available on targets where AtomicU64 has the same alignment as u64.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU64, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU64::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> u64
pub const fn into_inner(self) -> u64
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
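The example for into_inner appears to be missing; a minimal sketch:

```rust
use std::sync::atomic::AtomicU64;

fn main() {
    let some_var = AtomicU64::new(5);
    // Consuming the atomic by value yields the plain integer.
    assert_eq!(some_var.into_inner(), 5);
}
```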
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> u64
pub fn load(&self, order: Ordering) -> u64
1.34.0 · Sourcepub fn store(&self, val: u64, order: Ordering)
pub fn store(&self, val: u64, order: Ordering)
1.34.0 · Sourcepub fn swap(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
pub fn swap(&self, val: u64, order: Ordering) -> u64
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: u64, new: u64, order: Ordering) -> u64
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Available on target_has_atomic=64 only.
pub fn compare_and_swap(&self, current: u64, new: u64, order: Ordering) -> u64
Stores a value into the atomic integer if the current value is the same as
the current value.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let some_var = AtomicU64::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: u64,
new: u64,
success: Ordering,
failure: Ordering,
) -> Result<u64, u64>
Available on target_has_atomic=64 only.
pub fn compare_exchange( &self, current: u64, new: u64, success: Ordering, failure: Ordering, ) -> Result<u64, u64>
Stores a value into the atomic integer if the current value is the same as
the current value.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let some_var = AtomicU64::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: u64,
new: u64,
success: Ordering,
failure: Ordering,
) -> Result<u64, u64>
Available on target_has_atomic=64 only.
pub fn compare_exchange_weak( &self, current: u64, new: u64, success: Ordering, failure: Ordering, ) -> Result<u64, u64>
Stores a value into the atomic integer if the current value is the same as
the current value.
Unlike AtomicU64::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let val = AtomicU64::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u64, u64>
👎 Deprecating in 1.99.0: renamed to try_update for consistency. Available on target_has_atomic=64 only.
An alias for AtomicU64::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u64) -> Option<u64>,
) -> Result<u64, u64>
Available on target_has_atomic=64 only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let x = AtomicU64::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u64) -> u64,
) -> u64
Available on target_has_atomic=64 only.
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let foo = AtomicU64::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: u64, order: Ordering) -> u64
Available on target_has_atomic=64 only.
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
§Examples
use std::sync::atomic::{AtomicU64, Ordering};
let foo = AtomicU64::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut u64
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u64 instead of &AtomicU64.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU64;
extern "C" {
fn my_atomic_op(arg: *mut u64);
}
let atomic = AtomicU64::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}Source§impl Atomic<isize>
impl Atomic<isize>
1.0.0 (const: 1.24.0) · Sourcepub const fn new(v: isize) -> Atomic<isize>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut isize) -> &'a Atomic<isize>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicIsize};
// Get a pointer to an allocated value
let ptr: *mut isize = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicIsize>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicIsize::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicIsize>() (note that on some platforms this can be bigger than align_of::<isize>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.15.0 · Sourcepub fn get_mut(&mut self) -> &mut isize
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut isize) -> &mut Atomic<isize>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)Available on target_has_atomic_equal_alignment=ptr only.
Get atomic access to a &mut isize.
Note: This function is only available on targets where AtomicIsize has the same alignment as isize.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Atomic<isize>]) -> &mut [isize]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
Get non-atomic access to a &mut [AtomicIsize] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicIsize, Ordering};
let mut some_ints = [const { AtomicIsize::new(0) }; 10];
let view: &mut [isize] = AtomicIsize::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [isize]) -> &mut [Atomic<isize>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)Available on target_has_atomic_equal_alignment=ptr only.
Get atomic access to a &mut [isize] slice.
Note: This function is only available on targets where AtomicIsize has the same alignment as isize.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicIsize, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicIsize::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.15.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> isize
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.0.0 · Sourcepub fn load(&self, order: Ordering) -> isize
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.0.0 · Sourcepub fn store(&self, val: isize, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.0.0 · Sourcepub fn swap(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn compare_and_swap(
&self,
current: isize,
new: isize,
order: Ordering,
) -> isize
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead. Available on target_has_atomic=ptr only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let some_var = AtomicIsize::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.10.0 · Sourcepub fn compare_exchange(
&self,
current: isize,
new: isize,
success: Ordering,
failure: Ordering,
) -> Result<isize, isize>
Available on target_has_atomic=ptr only.
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let some_var = AtomicIsize::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · Sourcepub fn compare_exchange_weak(
&self,
current: isize,
new: isize,
success: Ordering,
failure: Ordering,
) -> Result<isize, isize>
Available on target_has_atomic=ptr only.
Stores a value into the atomic integer if the current value is the same as the current argument.
Unlike AtomicIsize::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let val = AtomicIsize::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.0.0 · Sourcepub fn fetch_add(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_sub(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_and(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.27.0 · Sourcepub fn fetch_nand(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_or(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_xor(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<isize, isize>
👎 Deprecating in 1.99.0: renamed to try_update for consistency. Available on target_has_atomic=ptr only.
An alias for AtomicIsize::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(isize) -> Option<isize>,
) -> Result<isize, isize>
Available on target_has_atomic=ptr only.
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicIsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let x = AtomicIsize::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(isize) -> isize,
) -> isize
Available on target_has_atomic=ptr only.
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicIsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
pub fn fetch_max(&self, val: isize, order: Ordering) -> isize
target_has_atomic=ptr only.Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let foo = AtomicIsize::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: isize, order: Ordering) -> isize
Available on target_has_atomic=ptr only.
pub fn fetch_min(&self, val: isize, order: Ordering) -> isize
target_has_atomic=ptr only.Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let foo = AtomicIsize::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut isize
pub const fn as_ptr(&self) -> *mut isize
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut isize instead of &AtomicIsize.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicIsize;
extern "C" {
fn my_atomic_op(arg: *mut isize);
}
let atomic = AtomicIsize::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}Source§impl Atomic<usize>
impl Atomic<usize>
1.0.0 (const: 1.24.0) · Sourcepub const fn new(v: usize) -> Atomic<usize>
pub const fn new(v: usize) -> Atomic<usize>
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a Atomic<usize>
pub const unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a Atomic<usize>
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicUsize};
// Get a pointer to an allocated value
let ptr: *mut usize = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicUsize>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicUsize::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }§Safety
ptrmust be aligned toalign_of::<AtomicUsize>()(note that on some platforms this can be bigger thanalign_of::<usize>()).ptrmust be valid for both reads and writes for the whole lifetime'a.- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.15.0 · Sourcepub fn get_mut(&mut self) -> &mut usize
pub fn get_mut(&mut self) -> &mut usize
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut usize) -> &mut Atomic<usize>
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)Available on target_has_atomic_equal_alignment=ptr only.
pub fn from_mut(v: &mut usize) -> &mut Atomic<usize>
atomic_from_mut #76314)target_has_atomic_equal_alignment=ptr only.Get atomic access to a &mut usize.
Note: This function is only available on targets where AtomicUsize has the same alignment as usize.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Atomic<usize>]) -> &mut [usize]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Atomic<usize>]) -> &mut [usize]
atomic_from_mut #76314)Get non-atomic access to a &mut [AtomicUsize] slice
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicUsize, Ordering};
let mut some_ints = [const { AtomicUsize::new(0) }; 10];
let view: &mut [usize] = AtomicUsize::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});Sourcepub fn from_mut_slice(v: &mut [usize]) -> &mut [Atomic<usize>]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)Available on target_has_atomic_equal_alignment=ptr only.
pub fn from_mut_slice(v: &mut [usize]) -> &mut [Atomic<usize>]
atomic_from_mut #76314)target_has_atomic_equal_alignment=ptr only.Get atomic access to a &mut [usize] slice.
Note: This function is only available on targets where AtomicUsize has the same alignment as usize.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicUsize, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicUsize::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}1.15.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> usize
pub const fn into_inner(self) -> usize
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.0.0 · Sourcepub fn load(&self, order: Ordering) -> usize
pub fn load(&self, order: Ordering) -> usize
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.0.0 · Sourcepub fn store(&self, val: usize, order: Ordering)
pub fn store(&self, val: usize, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.0.0 · Sourcepub fn swap(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn swap(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn compare_and_swap(
&self,
current: usize,
new: usize,
order: Ordering,
) -> usize
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak insteadAvailable on target_has_atomic=ptr only.
pub fn compare_and_swap( &self, current: usize, new: usize, order: Ordering, ) -> usize
compare_exchange or compare_exchange_weak insteadtarget_has_atomic=ptr only.Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);1.10.0 · Sourcepub fn compare_exchange(
&self,
current: usize,
new: usize,
success: Ordering,
failure: Ordering,
) -> Result<usize, usize>
Available on target_has_atomic=ptr only.
pub fn compare_exchange( &self, current: usize, new: usize, success: Ordering, failure: Ordering, ) -> Result<usize, usize>
target_has_atomic=ptr only.Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · Sourcepub fn compare_exchange_weak(
&self,
current: usize,
new: usize,
success: Ordering,
failure: Ordering,
) -> Result<usize, usize>
Available on target_has_atomic=ptr only.
pub fn compare_exchange_weak( &self, current: usize, new: usize, success: Ordering, failure: Ordering, ) -> Result<usize, usize>
target_has_atomic=ptr only.Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicUsize::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let val = AtomicUsize::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}§Considerations
compare_exchange_weak is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange_weak with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange_weak is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange_weak can lead to the ABA problem.
1.0.0 · Sourcepub fn fetch_add(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_add(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn fetch_sub(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_sub(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn fetch_and(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_and(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.27.0 · Sourcepub fn fetch_nand(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_nand(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn fetch_or(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_or(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn fetch_xor(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_xor(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<usize, usize>
👎Deprecated since 1.99.0: renamed to try_update for consistencyAvailable on target_has_atomic=ptr only.
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<usize, usize>
try_update for consistencytarget_has_atomic=ptr only.An alias for
AtomicUsize::try_update
.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(usize) -> Option<usize>,
) -> Result<usize, usize>
Available on target_has_atomic=ptr only.
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(usize) -> Option<usize>, ) -> Result<usize, usize>
target_has_atomic=ptr only.Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicUsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let x = AtomicUsize::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(usize) -> usize,
) -> usize
Available on target_has_atomic=ptr only.
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(usize) -> usize, ) -> usize
target_has_atomic=ptr only.Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicUsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_max(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: usize, order: Ordering) -> usize
Available on target_has_atomic=ptr only.
pub fn fetch_min(&self, val: usize, order: Ordering) -> usize
target_has_atomic=ptr only.Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut usize
pub const fn as_ptr(&self) -> *mut usize
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut usize instead of &AtomicUsize.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicUsize;
extern "C" {
fn my_atomic_op(arg: *mut usize);
}
let atomic = AtomicUsize::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}Trait Implementations§
1.0.0 · Source§impl<T> Default for Atomic<*mut T>
Available on target_has_atomic_load_store=ptr only.
impl<T> Default for Atomic<*mut T>
target_has_atomic_load_store=ptr only.1.23.0 (const: unstable) · Source§impl<T> From<*mut T> for Atomic<*mut T>
Available on target_has_atomic_load_store=ptr only.
impl<T> From<*mut T> for Atomic<*mut T>
target_has_atomic_load_store=ptr only.1.24.0 (const: unstable) · Source§impl From<bool> for Atomic<bool>
Available on target_has_atomic_load_store=8 only.
impl From<bool> for Atomic<bool>
target_has_atomic_load_store=8 only.1.24.0 · Source§impl<T> Pointer for Atomic<*mut T>
Available on target_has_atomic_load_store=ptr only.
impl<T> Pointer for Atomic<*mut T>
target_has_atomic_load_store=ptr only.impl<T> RefUnwindSafe for Atomic<*mut T>
target_has_atomic_load_store=ptr only.impl RefUnwindSafe for Atomic<bool>
target_has_atomic_load_store=8 only.impl RefUnwindSafe for Atomic<i16>
target_has_atomic_load_store=16 only.impl RefUnwindSafe for Atomic<i32>
target_has_atomic_load_store=32 only.impl RefUnwindSafe for Atomic<i64>
target_has_atomic_load_store=64 only.impl RefUnwindSafe for Atomic<i8>
target_has_atomic_load_store=8 only.impl RefUnwindSafe for Atomic<isize>
target_has_atomic_load_store=ptr only.impl RefUnwindSafe for Atomic<u16>
target_has_atomic_load_store=16 only.impl RefUnwindSafe for Atomic<u32>
target_has_atomic_load_store=32 only.impl RefUnwindSafe for Atomic<u64>
target_has_atomic_load_store=64 only.impl RefUnwindSafe for Atomic<u8>
target_has_atomic_load_store=8 only.impl RefUnwindSafe for Atomic<usize>
target_has_atomic_load_store=ptr only.