pub struct Atomic<T: AtomicPrimitive> { /* private fields */ }
🔬 This is a nightly-only experimental API. (generic_atomic #130539)
A memory location which can be safely modified from multiple threads.
This has the same size and bit validity as the underlying type T. However,
the alignment of this type is always equal to its size, even on targets where
T has alignment less than its size.
For more about the differences between atomic types and non-atomic types as well as information about the portability of this type, please see the module-level documentation.
Note: This type is only available on platforms that support atomic loads
and stores of T.
Implementations
impl Atomic<bool>
1.0.0 (const: 1.24.0) · pub const fn new(v: bool) -> AtomicBool
Creates a new AtomicBool.
§Examples
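A minimal illustration (a sketch, not the official example); since `new` is a `const fn`, it can also initialize a `static`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    // `new` is const since 1.24.0, so it can initialize a static directly.
    static FLAG: AtomicBool = AtomicBool::new(false);

    let atomic_true = AtomicBool::new(true);
    assert_eq!(atomic_true.load(Ordering::Relaxed), true);
    assert_eq!(FLAG.load(Ordering::Relaxed), false);
}
```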
1.75.0 (const: 1.84.0) · pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool
Creates a new AtomicBool from a pointer.
§Examples
use std::sync::atomic::{self, AtomicBool};
// Get a pointer to an allocated value
let ptr: *mut bool = Box::into_raw(Box::new(false));
assert!(ptr.cast::<AtomicBool>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicBool::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(true, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, true);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicBool>() (note that this is always true, since align_of::<AtomicBool>() == 1).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.15.0 · pub fn get_mut(&mut self) -> &mut bool
Returns a mutable reference to the underlying bool.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
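A minimal sketch of exclusive, non-atomic access through `get_mut`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let mut some_bool = AtomicBool::new(true);
    // Exclusive access via `&mut self`: no atomic instructions needed here.
    assert_eq!(*some_bool.get_mut(), true);
    *some_bool.get_mut() = false;
    assert_eq!(some_bool.load(Ordering::SeqCst), false);
}
```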
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut bool) -> &mut Self
Gets atomic access to a &mut bool.
§Examples
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool]
Gets non-atomic access to a &mut [AtomicBool] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicBool, Ordering};
let mut some_bools = [const { AtomicBool::new(false) }; 10];
let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
assert_eq!(view, [false; 10]);
view[..5].copy_from_slice(&[true; 5]);
std::thread::scope(|s| {
for t in &some_bools[..5] {
s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
}
for f in &some_bools[5..] {
s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
}
});
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self]
Gets atomic access to a &mut [bool] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicBool, Ordering};
let mut some_bools = [false; 10];
let a = &*AtomicBool::from_mut_slice(&mut some_bools);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(true, Ordering::Relaxed));
}
});
assert_eq!(some_bools, [true; 10]);
1.15.0 (const: 1.79.0) · pub const fn into_inner(self) -> bool
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
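A minimal sketch; ownership guarantees exclusive access, so no memory ordering is involved:

```rust
use std::sync::atomic::AtomicBool;

fn main() {
    let some_bool = AtomicBool::new(true);
    // Consuming the atomic yields the plain contained value.
    assert_eq!(some_bool.into_inner(), true);
}
```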
1.0.0 · pub fn load(&self, order: Ordering) -> bool
Loads a value from the bool.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.0.0 · pub fn store(&self, val: bool, order: Ordering)
Stores a value into the bool.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
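A minimal sketch of a paired load and store:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let some_bool = AtomicBool::new(true);
    assert_eq!(some_bool.load(Ordering::Relaxed), true);
    some_bool.store(false, Ordering::Relaxed);
    assert_eq!(some_bool.load(Ordering::Relaxed), false);
}
```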
1.0.0 · pub fn swap(&self, val: bool, order: Ordering) -> bool
Stores a value into the bool, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
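A minimal sketch showing that `swap` returns the value it replaced:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let some_bool = AtomicBool::new(true);
    // Returns the previous value while storing the new one atomically.
    assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    assert_eq!(some_bool.load(Ordering::Relaxed), false);
}
```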
1.0.0 · pub fn compare_and_swap(
&self,
current: bool,
new: bool,
order: Ordering,
) -> bool
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Stores a value into the bool if the current value is the same as the current value.
The return value is always the previous value. If it is equal to current, then the value
was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let some_bool = AtomicBool::new(true);
assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
assert_eq!(some_bool.load(Ordering::Relaxed), false);
assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
assert_eq!(some_bool.load(Ordering::Relaxed), false);
1.10.0 · pub fn compare_exchange(
&self,
current: bool,
new: bool,
success: Ordering,
failure: Ordering,
) -> Result<bool, bool>
Stores a value into the bool if the current value is the same as the current value.
The return value is a result indicating whether the new value was written and containing
the previous value. On success this value is guaranteed to be equal to current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let some_bool = AtomicBool::new(true);
assert_eq!(some_bool.compare_exchange(true,
false,
Ordering::Acquire,
Ordering::Relaxed),
Ok(true));
assert_eq!(some_bool.load(Ordering::Relaxed), false);
assert_eq!(some_bool.compare_exchange(true, true,
Ordering::SeqCst,
Ordering::Acquire),
Err(false));
assert_eq!(some_bool.load(Ordering::Relaxed), false);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. In this case, compare_exchange can lead to the
ABA problem.
1.10.0 · pub fn compare_exchange_weak(
&self,
current: bool,
new: bool,
success: Ordering,
failure: Ordering,
) -> Result<bool, bool>
Stores a value into the bool if the current value is the same as the current value.
Unlike AtomicBool::compare_exchange, this function is allowed to spuriously fail even when the
comparison succeeds, which can result in more efficient code on some platforms. The
return value is a result indicating whether the new value was written and containing the
previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let val = AtomicBool::new(false);
let new = true;
let mut old = val.load(Ordering::Relaxed);
loop {
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. In this case, compare_exchange can lead to the
ABA problem.
1.0.0 · pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
Logical "and" with a boolean value.
Performs a logical "and" operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
1.0.0 · pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
Logical "nand" with a boolean value.
Performs a logical "nand" operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
1.0.0 · pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
Logical "or" with a boolean value.
Performs a logical "or" operation on the current value and the argument val, and sets the
new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_or(true, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
1.0.0 · pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
Logical "xor" with a boolean value.
Performs a logical "xor" operation on the current value and the argument val, and sets
the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), true);
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), false);
1.81.0 · pub fn fetch_not(&self, order: Ordering) -> bool
Logical "not" with a boolean value.
Performs a logical "not" operation on the current value, and sets the new value to the result.
Returns the previous value.
fetch_not takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let foo = AtomicBool::new(true);
assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
assert_eq!(foo.load(Ordering::SeqCst), false);
let foo = AtomicBool::new(false);
assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
assert_eq!(foo.load(Ordering::SeqCst), true);
1.70.0 (const: 1.70.0) · pub const fn as_ptr(&self) -> *mut bool
Returns a mutable pointer to the underlying bool.
Doing non-atomic reads and writes on the resulting boolean can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut bool instead of &AtomicBool.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicBool;
extern "C" {
fn my_atomic_op(arg: *mut bool);
}
let mut atomic = AtomicBool::new(true);
unsafe {
my_atomic_op(atomic.as_ptr());
}
1.53.0 · pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<bool, bool>
👎 Deprecating in 1.99.0: renamed to try_update for consistency
An alias for AtomicBool::try_update.
1.96.0 · pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(bool) -> Option<bool>,
) -> Result<bool, bool>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function
returned Some(_), else Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been
changed from other threads in the meantime, as long as the function
returns Some(_), but the function will have been applied only once to
the stored value.
try_update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicBool::compare_exchange respectively.
Using Acquire as success ordering makes the store part of this
operation Relaxed, and using Release makes the final successful
load Relaxed. The (failed) load ordering can only be SeqCst,
Acquire or Relaxed.
Note: This method is only available on platforms that support atomic
operations on u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem.
§Examples
use std::sync::atomic::{AtomicBool, Ordering};
let x = AtomicBool::new(false);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
assert_eq!(x.load(Ordering::SeqCst), false);
1.96.0 · pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(bool) -> bool,
) -> bool
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicBool::compare_exchange respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem.
§Examples
impl<T> Atomic<*mut T>
1.0.0 (const: 1.24.0) · pub const fn new(p: *mut T) -> AtomicPtr<T>
Creates a new AtomicPtr.
§Examples
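A minimal sketch; a `&mut T` coerces to the `*mut T` the constructor expects:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let ptr = &mut 5;
    let atomic_ptr = AtomicPtr::new(ptr);
    // The atomic now holds the address of the value above.
    assert_eq!(unsafe { *atomic_ptr.load(Ordering::Relaxed) }, 5);
}
```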
1.75.0 (const: 1.84.0) · pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T>
Creates a new AtomicPtr from a pointer.
§Examples
use std::sync::atomic::{self, AtomicPtr};
// Get a pointer to an allocated value
let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert!(!unsafe { *ptr }.is_null());
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicPtr<T>>() (note that on some platforms this can be bigger than align_of::<*mut T>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
🔬 This is a nightly-only experimental API. (atomic_ptr_null #150733)
pub const fn null() -> AtomicPtr<T>
Creates a new AtomicPtr initialized with a null pointer.
§Examples
1.15.0 · pub fn get_mut(&mut self) -> &mut *mut T
Returns a mutable reference to the underlying pointer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
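A minimal sketch of replacing the stored pointer through exclusive access:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut data = 10;
    let mut atomic_ptr = AtomicPtr::new(&mut data);
    let mut other_data = 5;
    // Exclusive access via `&mut self`: a plain, non-atomic write is fine.
    *atomic_ptr.get_mut() = &mut other_data;
    assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
}
```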
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut *mut T) -> &mut Self
Gets atomic access to a pointer.
Note: This function is only available on targets where AtomicPtr<T> has the same alignment as *const T.
§Examples
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T]
Gets non-atomic access to a &mut [AtomicPtr] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::ptr::null_mut;
use std::sync::atomic::{AtomicPtr, Ordering};
let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
assert_eq!(view, [null_mut::<String>(); 10]);
view
.iter_mut()
.enumerate()
.for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
std::thread::scope(|s| {
for ptr in &some_ptrs {
s.spawn(move || {
let ptr = ptr.load(Ordering::Relaxed);
assert!(!ptr.is_null());
let name = unsafe { Box::from_raw(ptr) };
println!("Hello, {name}!");
});
}
});
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self]
Gets atomic access to a slice of pointers.
Note: This function is only available on targets where AtomicPtr<T> has the same alignment as *const T.
§Examples
#![feature(atomic_from_mut)]
use std::ptr::null_mut;
use std::sync::atomic::{AtomicPtr, Ordering};
let mut some_ptrs = [null_mut::<String>(); 10];
let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || {
let name = Box::new(format!("thread{i}"));
a[i].store(Box::into_raw(name), Ordering::Relaxed);
});
}
});
for p in some_ptrs {
assert!(!p.is_null());
let name = unsafe { Box::from_raw(p) };
println!("Hello, {name}!");
}
1.15.0 (const: 1.79.0) · pub const fn into_inner(self) -> *mut T
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
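A minimal sketch; consuming the atomic returns the raw pointer it contained:

```rust
use std::sync::atomic::AtomicPtr;

fn main() {
    let mut data = 5;
    let atomic_ptr = AtomicPtr::new(&mut data);
    // Ownership rules out concurrent access, so no ordering is needed.
    assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
}
```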
1.0.0 · pub fn load(&self, order: Ordering) -> *mut T
Loads a value from the pointer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.0.0 · pub fn store(&self, ptr: *mut T, order: Ordering)
Stores a value into the pointer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.0.0 · pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
Stores a value into the pointer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
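A minimal sketch; `swap` installs the new pointer and hands back the old one:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut a = 5;
    let mut b = 10;
    let some_ptr = AtomicPtr::new(&mut a);
    // The previously stored pointer (to `a`) is returned.
    let prev = some_ptr.swap(&mut b, Ordering::Relaxed);
    assert_eq!(unsafe { *prev }, 5);
    assert_eq!(unsafe { *some_ptr.load(Ordering::Relaxed) }, 10);
}
```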
1.0.0 · pub fn compare_and_swap(
&self,
current: *mut T,
new: *mut T,
order: Ordering,
) -> *mut T
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
Stores a value into the pointer if the current value is the same as the current value.
The return value is always the previous value. If it is equal to current, then the value
was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
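A minimal sketch of the deprecated method (the `allow` silences the deprecation warning); the exchange succeeds because the stored pointer equals `current`:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let ptr = &mut 5;
    let some_ptr = AtomicPtr::new(ptr);
    let other_ptr = &mut 10;
    // On success the previous pointer is returned, which equals `current`.
    #[allow(deprecated)]
    let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
    assert_eq!(value, ptr as *mut i32);
    assert_eq!(unsafe { *some_ptr.load(Ordering::Relaxed) }, 10);
}
```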
1.10.0 · pub fn compare_exchange(
&self,
current: *mut T,
new: *mut T,
success: Ordering,
failure: Ordering,
) -> Result<*mut T, *mut T>
Stores a value into the pointer if the current value is the same as the current value.
The return value is a result indicating whether the new value was written and containing
the previous value. On success this value is guaranteed to be equal to current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let other_ptr = &mut 10;
let value = some_ptr.compare_exchange(ptr, other_ptr,
Ordering::SeqCst, Ordering::Relaxed);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · pub fn compare_exchange_weak(
&self,
current: *mut T,
new: *mut T,
success: Ordering,
failure: Ordering,
) -> Result<*mut T, *mut T>
Stores a value into the pointer if the current value is the same as the current value.
Unlike AtomicPtr::compare_exchange, this function is allowed to spuriously fail even when the
comparison succeeds, which can result in more efficient code on some platforms. The
return value is a result indicating whether the new value was written and containing the
previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let some_ptr = AtomicPtr::new(&mut 5);
let new = &mut 10;
let mut old = some_ptr.load(Ordering::Relaxed);
loop {
match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.53.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<*mut T, *mut T>
👎Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<*mut T, *mut T>
An alias for AtomicPtr::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(*mut T) -> Option<*mut T>,
) -> Result<*mut T, *mut T>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(*mut T) -> Option<*mut T>, ) -> Result<*mut T, *mut T>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function
returned Some(_), else Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been
changed from other threads in the meantime, as long as the function
returns Some(_), but the function will have been applied only once to
the stored value.
try_update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicPtr::compare_exchange respectively.
Using Acquire as success ordering makes the store part of this
operation Relaxed, and using Release makes the final successful
load Relaxed. The (failed) load ordering can only be SeqCst,
Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem, which is a particularly common pitfall for pointers!
§Examples
use std::sync::atomic::{AtomicPtr, Ordering};
let ptr: *mut _ = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let new: *mut _ = &mut 10;
assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
if x == ptr {
Some(new)
} else {
None
}
});
assert_eq!(result, Ok(ptr));
assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(*mut T) -> *mut T,
) -> *mut T
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(*mut T) -> *mut T, ) -> *mut T
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory
ordering of this operation. The first describes the required ordering for
when the operation finally succeeds while the second describes the
required ordering for loads. These correspond to the success and failure
orderings of AtomicPtr::compare_exchange respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on pointers.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem, which is a particularly common pitfall for pointers!
§Examples
1.91.0 · Sourcepub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer's address by adding val (in units of T),
returning the previous pointer.
This is equivalent to using wrapping_add to atomically perform the
equivalent of ptr = ptr.wrapping_add(val);.
This method operates in units of T, which means that it cannot be used
to offset the pointer by an amount which is not a multiple of
size_of::<T>(). This can sometimes be inconvenient, as you may want to
work with a deliberately misaligned pointer. In such cases, you may use
the fetch_byte_add method instead.
fetch_ptr_add takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Sourcepub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer's address by subtracting val (in units of T),
returning the previous pointer.
This is equivalent to using wrapping_sub to atomically perform the
equivalent of ptr = ptr.wrapping_sub(val);.
This method operates in units of T, which means that it cannot be used
to offset the pointer by an amount which is not a multiple of
size_of::<T>(). This can sometimes be inconvenient, as you may want to
work with a deliberately misaligned pointer. In such cases, you may use
the fetch_byte_sub method instead.
fetch_ptr_sub takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Sourcepub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer's address by adding val bytes, returning the
previous pointer.
This is equivalent to using wrapping_byte_add to atomically
perform ptr = ptr.wrapping_byte_add(val).
fetch_byte_add takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Sourcepub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer's address by subtracting val bytes, returning the
previous pointer.
This is equivalent to using wrapping_byte_sub to atomically
perform ptr = ptr.wrapping_byte_sub(val).
fetch_byte_sub takes an Ordering argument which describes the
memory ordering of this operation. All ordering modes are possible. Note
that using Acquire makes the store part of this operation
Relaxed, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
§Examples
1.91.0 · Sourcepub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “or” operation on the address of the current pointer,
and the argument val, and stores a pointer with provenance of the
current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a | val). This can be used in tagged
pointer schemes to atomically set tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_or(val).map_addr(|a| a | val).
fetch_or takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
use core::sync::atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Tag the bottom bit of the pointer.
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
// Extract and untag.
let tagged = atom.load(Ordering::Relaxed);
assert_eq!(tagged.addr() & 1, 1);
assert_eq!(tagged.map_addr(|p| p & !1), pointer);
1.91.0 · Sourcepub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “and” operation on the address of the current
pointer, and the argument val, and stores a pointer with provenance of
the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a & val). This can be used in tagged
pointer schemes to atomically unset tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_and(val).map_addr(|a| a & val).
fetch_and takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
use core::sync::atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
// A tagged pointer
let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
// Untag, and extract the previously tagged pointer.
let untagged = atom.fetch_and(!1, Ordering::Relaxed)
.map_addr(|a| a & !1);
assert_eq!(untagged, pointer);
1.91.0 · Sourcepub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “xor” operation on the address of the current
pointer, and the argument val, and stores a pointer with provenance of
the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform
ptr = ptr.map_addr(|a| a ^ val). This can be used in tagged
pointer schemes to atomically toggle tag bits.
Caveat: This operation returns the previous value. To compute the
stored value without losing provenance, you may use map_addr. For
example: a.fetch_xor(val).map_addr(|a| a ^ val).
fetch_xor takes an Ordering argument which describes the memory
ordering of this operation. All ordering modes are possible. Note that
using Acquire makes the store part of this operation Relaxed,
and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic
operations on AtomicPtr.
This API and its claimed semantics are part of the Strict Provenance
experiment, see the module documentation for ptr for
details.
§Examples
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut *mut T
pub const fn as_ptr(&self) -> *mut *mut T
Returns a mutable pointer to the underlying pointer.
Doing non-atomic reads and writes on the resulting pointer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut *mut T instead of &AtomicPtr<T>.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicPtr;
extern "C" {
fn my_atomic_op(arg: *mut *mut u32);
}
let mut value = 17;
let atomic = AtomicPtr::new(&mut value);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<i8>
impl Atomic<i8>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: i8) -> Self
pub const fn new(v: i8) -> Self
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut i8) -> &'a AtomicI8
pub const unsafe fn from_ptr<'a>(ptr: *mut i8) -> &'a AtomicI8
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI8};
// Get a pointer to an allocated value
let ptr: *mut i8 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI8>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI8::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI8>() (note that this is always true, since align_of::<AtomicI8>() == 1).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut i8
pub fn get_mut(&mut self) -> &mut i8
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut i8) -> &mut Self
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut i8) -> &mut Self
Get atomic access to a &mut i8.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [i8]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [i8]
Get non-atomic access to a &mut [AtomicI8] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI8, Ordering};
let mut some_ints = [const { AtomicI8::new(0) }; 10];
let view: &mut [i8] = AtomicI8::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [i8]) -> &mut [Self]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [i8]) -> &mut [Self]
Get atomic access to a &mut [i8] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI8, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI8::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> i8
pub const fn into_inner(self) -> i8
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> i8
pub fn load(&self, order: Ordering) -> i8
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.34.0 · Sourcepub fn store(&self, val: i8, order: Ordering)
pub fn store(&self, val: i8, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.34.0 · Sourcepub fn swap(&self, val: i8, order: Ordering) -> i8
pub fn swap(&self, val: i8, order: Ordering) -> i8
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: i8, new: i8, order: Ordering) -> i8
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: i8, new: i8, order: Ordering) -> i8
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let some_var = AtomicI8::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: i8,
new: i8,
success: Ordering,
failure: Ordering,
) -> Result<i8, i8>
pub fn compare_exchange( &self, current: i8, new: i8, success: Ordering, failure: Ordering, ) -> Result<i8, i8>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let some_var = AtomicI8::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: i8,
new: i8,
success: Ordering,
failure: Ordering,
) -> Result<i8, i8>
pub fn compare_exchange_weak( &self, current: i8, new: i8, success: Ordering, failure: Ordering, ) -> Result<i8, i8>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicI8::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let val = AtomicI8::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: i8, order: Ordering) -> i8
pub fn fetch_add(&self, val: i8, order: Ordering) -> i8
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: i8, order: Ordering) -> i8
pub fn fetch_sub(&self, val: i8, order: Ordering) -> i8
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: i8, order: Ordering) -> i8
pub fn fetch_and(&self, val: i8, order: Ordering) -> i8
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: i8, order: Ordering) -> i8
pub fn fetch_nand(&self, val: i8, order: Ordering) -> i8
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: i8, order: Ordering) -> i8
pub fn fetch_or(&self, val: i8, order: Ordering) -> i8
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: i8, order: Ordering) -> i8
pub fn fetch_xor(&self, val: i8, order: Ordering) -> i8
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i8, i8>
👎Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<i8, i8>
An alias for AtomicI8::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i8) -> Option<i8>,
) -> Result<i8, i8>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i8) -> Option<i8>, ) -> Result<i8, i8>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let x = AtomicI8::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i8) -> i8,
) -> i8
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i8) -> i8, ) -> i8
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Source
pub fn fetch_max(&self, val: i8, order: Ordering) -> i8
pub fn fetch_max(&self, val: i8, order: Ordering) -> i8
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let foo = AtomicI8::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
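The one-step example body was lost in extraction; a minimal sketch, combining the previous value returned by fetch_max with the argument:

```rust
use std::sync::atomic::{AtomicI8, Ordering};

fn main() {
    let foo = AtomicI8::new(23);
    let bar = 42;
    // `fetch_max` returns the *previous* value, so take the max with `bar`
    // once more to get the new (maximum) value in a single expression.
    let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
    assert_eq!(max_foo, 42);
}
```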
1.45.0 · Source
pub fn fetch_min(&self, val: i8, order: Ordering) -> i8
pub fn fetch_min(&self, val: i8, order: Ordering) -> i8
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i8.
§Examples
use std::sync::atomic::{AtomicI8, Ordering};
let foo = AtomicI8::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
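The one-step example body was lost in extraction; a minimal sketch, mirroring the fetch_max counterpart:

```rust
use std::sync::atomic::{AtomicI8, Ordering};

fn main() {
    let foo = AtomicI8::new(23);
    let bar = 12;
    // `fetch_min` returns the *previous* value, so take the min with `bar`
    // once more to get the new (minimum) value in a single expression.
    let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
    assert_eq!(min_foo, 12);
}
```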
1.70.0 (const: 1.70.0) · Source
pub const fn as_ptr(&self) -> *mut i8
pub const fn as_ptr(&self) -> *mut i8
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i8 instead of &AtomicI8.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI8;
extern "C" {
fn my_atomic_op(arg: *mut i8);
}
let atomic = AtomicI8::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§
impl Atomic<u8>
impl Atomic<u8>
1.34.0 (const: 1.34.0) · Source
pub const fn new(v: u8) -> Self
pub const fn new(v: u8) -> Self
Creates a new atomic integer.
§Examples
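The example body was lost in extraction; a minimal sketch in the style of the other constructors:

```rust
use std::sync::atomic::AtomicU8;

fn main() {
    let atomic_forty_two = AtomicU8::new(42);
    // `into_inner` consumes the atomic and yields the stored value.
    assert_eq!(atomic_forty_two.into_inner(), 42);
}
```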
1.75.0 (const: 1.84.0) · Source
pub const unsafe fn from_ptr<'a>(ptr: *mut u8) -> &'a AtomicU8
pub const unsafe fn from_ptr<'a>(ptr: *mut u8) -> &'a AtomicU8
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU8};
// Get a pointer to an allocated value
let ptr: *mut u8 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU8>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU8::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicU8>() (note that this is always true, since align_of::<AtomicU8>() == 1).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Source
pub fn get_mut(&mut self) -> &mut u8
pub fn get_mut(&mut self) -> &mut u8
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
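The example body was lost in extraction; a minimal sketch showing the non-atomic access that a mutable reference permits:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let mut some_int = AtomicU8::new(10);
    // `&mut self` proves exclusive access, so no atomic operation is needed.
    assert_eq!(*some_int.get_mut(), 10);
    *some_int.get_mut() = 5;
    assert_eq!(some_int.load(Ordering::SeqCst), 5);
}
```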
Source
pub fn from_mut(v: &mut u8) -> &mut Self
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut u8) -> &mut Self
Get atomic access to a &mut u8.
§Examples
Source
pub fn get_mut_slice(this: &mut [Self]) -> &mut [u8]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [u8]
Get non-atomic access to a &mut [AtomicU8] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU8, Ordering};
let mut some_ints = [const { AtomicU8::new(0) }; 10];
let view: &mut [u8] = AtomicU8::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Source
pub fn from_mut_slice(v: &mut [u8]) -> &mut [Self]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [u8]) -> &mut [Self]
Get atomic access to a &mut [u8] slice.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU8, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU8::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Source
pub const fn into_inner(self) -> u8
pub const fn into_inner(self) -> u8
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
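The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::AtomicU8;

fn main() {
    let some_int = AtomicU8::new(5);
    // Consuming the atomic by value yields the contained integer.
    assert_eq!(some_int.into_inner(), 5);
}
```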
1.34.0 · Source
pub fn load(&self, order: Ordering) -> u8
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.34.0 · Source
pub fn store(&self, val: u8, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.34.0 · Source
pub fn swap(&self, val: u8, order: Ordering) -> u8
pub fn swap(&self, val: u8, order: Ordering) -> u8
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
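The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let some_int = AtomicU8::new(5);
    // `swap` stores the new value and returns the one it replaced.
    assert_eq!(some_int.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_int.load(Ordering::Relaxed), 10);
}
```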
1.34.0 · Source
pub fn compare_and_swap(&self, current: u8, new: u8, order: Ordering) -> u8
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: u8, new: u8, order: Ordering) -> u8
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let some_var = AtomicU8::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Source
pub fn compare_exchange(
&self,
current: u8,
new: u8,
success: Ordering,
failure: Ordering,
) -> Result<u8, u8>
pub fn compare_exchange( &self, current: u8, new: u8, success: Ordering, failure: Ordering, ) -> Result<u8, u8>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let some_var = AtomicU8::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn compare_exchange_weak(
&self,
current: u8,
new: u8,
success: Ordering,
failure: Ordering,
) -> Result<u8, u8>
pub fn compare_exchange_weak( &self, current: u8, new: u8, success: Ordering, failure: Ordering, ) -> Result<u8, u8>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicU8::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let val = AtomicU8::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange_weak is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Source
pub fn fetch_add(&self, val: u8, order: Ordering) -> u8
pub fn fetch_add(&self, val: u8, order: Ordering) -> u8
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
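The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(0);
    // `fetch_add` returns the previous value.
    assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```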
1.34.0 · Source
pub fn fetch_sub(&self, val: u8, order: Ordering) -> u8
pub fn fetch_sub(&self, val: u8, order: Ordering) -> u8
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
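The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(20);
    // `fetch_sub` returns the previous value.
    assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```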
1.34.0 · Source
pub fn fetch_and(&self, val: u8, order: Ordering) -> u8
pub fn fetch_and(&self, val: u8, order: Ordering) -> u8
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
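The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(0b101101);
    // Returns the previous value; the stored value becomes the bitwise AND.
    assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
}
```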
1.34.0 · Source
pub fn fetch_nand(&self, val: u8, order: Ordering) -> u8
pub fn fetch_nand(&self, val: u8, order: Ordering) -> u8
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
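The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(0x13);
    // Returns the previous value; the stored value becomes !(old & val).
    assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
    assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
}
```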
1.34.0 · Source
pub fn fetch_or(&self, val: u8, order: Ordering) -> u8
pub fn fetch_or(&self, val: u8, order: Ordering) -> u8
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
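The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(0b101101);
    // Returns the previous value; the stored value becomes the bitwise OR.
    assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
}
```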
1.34.0 · Source
pub fn fetch_xor(&self, val: u8, order: Ordering) -> u8
pub fn fetch_xor(&self, val: u8, order: Ordering) -> u8
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
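The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(0b101101);
    // Returns the previous value; the stored value becomes the bitwise XOR.
    assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
}
```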
1.45.0 · Source
pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u8, u8>
👎 Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<u8, u8>
An alias for
AtomicU8::try_update
.
1.96.0 · Source
pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u8) -> Option<u8>,
) -> Result<u8, u8>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u8) -> Option<u8>, ) -> Result<u8, u8>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let x = AtomicU8::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Source
pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u8) -> u8,
) -> u8
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u8) -> u8, ) -> u8
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU8::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Source
pub fn fetch_max(&self, val: u8, order: Ordering) -> u8
pub fn fetch_max(&self, val: u8, order: Ordering) -> u8
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let foo = AtomicU8::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
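The one-step example body was lost in extraction; a minimal sketch, combining the previous value returned by fetch_max with the argument:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(23);
    let bar = 42;
    // `fetch_max` returns the *previous* value, so take the max with `bar`
    // once more to get the new (maximum) value in a single expression.
    let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
    assert_eq!(max_foo, 42);
}
```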
1.45.0 · Source
pub fn fetch_min(&self, val: u8, order: Ordering) -> u8
pub fn fetch_min(&self, val: u8, order: Ordering) -> u8
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u8.
§Examples
use std::sync::atomic::{AtomicU8, Ordering};
let foo = AtomicU8::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
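The one-step example body was lost in extraction; a minimal sketch, mirroring the fetch_max counterpart:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let foo = AtomicU8::new(23);
    let bar = 12;
    // `fetch_min` returns the *previous* value, so take the min with `bar`
    // once more to get the new (minimum) value in a single expression.
    let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
    assert_eq!(min_foo, 12);
}
```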
1.70.0 (const: 1.70.0) · Source
pub const fn as_ptr(&self) -> *mut u8
pub const fn as_ptr(&self) -> *mut u8
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u8 instead of &AtomicU8.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU8;
extern "C" {
fn my_atomic_op(arg: *mut u8);
}
let atomic = AtomicU8::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§
impl Atomic<i16>
impl Atomic<i16>
1.34.0 (const: 1.34.0) · Source
pub const fn new(v: i16) -> Self
pub const fn new(v: i16) -> Self
Creates a new atomic integer.
§Examples
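The example body was lost in extraction; a minimal sketch in the style of the other constructors:

```rust
use std::sync::atomic::AtomicI16;

fn main() {
    let atomic_forty_two = AtomicI16::new(42);
    // `into_inner` consumes the atomic and yields the stored value.
    assert_eq!(atomic_forty_two.into_inner(), 42);
}
```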
1.75.0 (const: 1.84.0) · Source
pub const unsafe fn from_ptr<'a>(ptr: *mut i16) -> &'a AtomicI16
pub const unsafe fn from_ptr<'a>(ptr: *mut i16) -> &'a AtomicI16
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI16};
// Get a pointer to an allocated value
let ptr: *mut i16 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI16>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI16::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI16>() (note that on some platforms this can be bigger than align_of::<i16>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Source
pub fn get_mut(&mut self) -> &mut i16
pub fn get_mut(&mut self) -> &mut i16
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
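The example body was lost in extraction; a minimal sketch showing the non-atomic access that a mutable reference permits:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let mut some_int = AtomicI16::new(10);
    // `&mut self` proves exclusive access, so no atomic operation is needed.
    assert_eq!(*some_int.get_mut(), 10);
    *some_int.get_mut() = 5;
    assert_eq!(some_int.load(Ordering::SeqCst), 5);
}
```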
Source
pub fn from_mut(v: &mut i16) -> &mut Self
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut i16) -> &mut Self
Get atomic access to a &mut i16.
Note: This function is only available on targets where AtomicI16 has the same alignment as i16.
§Examples
Source
pub fn get_mut_slice(this: &mut [Self]) -> &mut [i16]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [i16]
Get non-atomic access to a &mut [AtomicI16] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI16, Ordering};
let mut some_ints = [const { AtomicI16::new(0) }; 10];
let view: &mut [i16] = AtomicI16::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Source
pub fn from_mut_slice(v: &mut [i16]) -> &mut [Self]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [i16]) -> &mut [Self]
Get atomic access to a &mut [i16] slice.
Note: This function is only available on targets where AtomicI16 has the same alignment as i16.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI16, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI16::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Source
pub const fn into_inner(self) -> i16
pub const fn into_inner(self) -> i16
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
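The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::AtomicI16;

fn main() {
    let some_int = AtomicI16::new(5);
    // Consuming the atomic by value yields the contained integer.
    assert_eq!(some_int.into_inner(), 5);
}
```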
1.34.0 · Source
pub fn load(&self, order: Ordering) -> i16
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
1.34.0 · Source
pub fn store(&self, val: i16, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
1.34.0 · Source
pub fn swap(&self, val: i16, order: Ordering) -> i16
pub fn swap(&self, val: i16, order: Ordering) -> i16
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Examples
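The example body was lost in extraction; a minimal sketch:

```rust
use std::sync::atomic::{AtomicI16, Ordering};

fn main() {
    let some_int = AtomicI16::new(5);
    // `swap` stores the new value and returns the one it replaced.
    assert_eq!(some_int.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_int.load(Ordering::Relaxed), 10);
}
```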
1.34.0 · Source
pub fn compare_and_swap(&self, current: i16, new: i16, order: Ordering) -> i16
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: i16, new: i16, order: Ordering) -> i16
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI16, Ordering};
let some_var = AtomicI16::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Source
pub fn compare_exchange(
&self,
current: i16,
new: i16,
success: Ordering,
failure: Ordering,
) -> Result<i16, i16>
pub fn compare_exchange( &self, current: i16, new: i16, success: Ordering, failure: Ordering, ) -> Result<i16, i16>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
use std::sync::atomic::{AtomicI16, Ordering};
let some_var = AtomicI16::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
ยงConsiderations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn compare_exchange_weak(
&self,
current: i16,
new: i16,
success: Ordering,
failure: Ordering,
) -> Result<i16, i16>
pub fn compare_exchange_weak( &self, current: i16, new: i16, success: Ordering, failure: Ordering, ) -> Result<i16, i16>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicI16::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
use std::sync::atomic::{AtomicI16, Ordering};
let val = AtomicI16::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
ยงConsiderations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn fetch_add(&self, val: i16, order: Ordering) -> i16
pub fn fetch_add(&self, val: i16, order: Ordering) -> i16
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_sub(&self, val: i16, order: Ordering) -> i16
pub fn fetch_sub(&self, val: i16, order: Ordering) -> i16
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_and(&self, val: i16, order: Ordering) -> i16
pub fn fetch_and(&self, val: i16, order: Ordering) -> i16
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_nand(&self, val: i16, order: Ordering) -> i16
pub fn fetch_nand(&self, val: i16, order: Ordering) -> i16
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_or(&self, val: i16, order: Ordering) -> i16
pub fn fetch_or(&self, val: i16, order: Ordering) -> i16
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_xor(&self, val: i16, order: Ordering) -> i16
pub fn fetch_xor(&self, val: i16, order: Ordering) -> i16
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
1.45.0 ยท Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i16, i16>
๐Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<i16, i16>
An alias for AtomicI16::try_update.
1.96.0 ยท Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i16) -> Option<i16>,
) -> Result<i16, i16>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i16) -> Option<i16>, ) -> Result<i16, i16>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
use std::sync::atomic::{AtomicI16, Ordering};
let x = AtomicI16::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 ยท Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i16) -> i16,
) -> i16
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i16) -> i16, ) -> i16
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
1.45.0 ยท Sourcepub fn fetch_max(&self, val: i16, order: Ordering) -> i16
pub fn fetch_max(&self, val: i16, order: Ordering) -> i16
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
use std::sync::atomic::{AtomicI16, Ordering};
let foo = AtomicI16::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 ยท Sourcepub fn fetch_min(&self, val: i16, order: Ordering) -> i16
pub fn fetch_min(&self, val: i16, order: Ordering) -> i16
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i16.
ยงExamples
use std::sync::atomic::{AtomicI16, Ordering};
let foo = AtomicI16::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) ยท Sourcepub const fn as_ptr(&self) -> *mut i16
pub const fn as_ptr(&self) -> *mut i16
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i16 instead of &AtomicI16.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
ยงExamples
use std::sync::atomic::AtomicI16;
extern "C" {
fn my_atomic_op(arg: *mut i16);
}
let atomic = AtomicI16::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Sourceยงimpl Atomic<u16>
impl Atomic<u16>
1.34.0 (const: 1.34.0) ยท Sourcepub const fn new(v: u16) -> Self
pub const fn new(v: u16) -> Self
Creates a new atomic integer.
ยงExamples
1.75.0 (const: 1.84.0) ยท Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut u16) -> &'a AtomicU16
pub const unsafe fn from_ptr<'a>(ptr: *mut u16) -> &'a AtomicU16
Creates a new reference to an atomic integer from a pointer.
ยงExamples
use std::sync::atomic::{self, AtomicU16};
// Get a pointer to an allocated value
let ptr: *mut u16 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU16>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU16::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
ยงSafety
- ptr must be aligned to align_of::<AtomicU16>() (note that on some platforms this can be bigger than align_of::<u16>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 ยท Sourcepub fn get_mut(&mut self) -> &mut u16
pub fn get_mut(&mut self) -> &mut u16
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
Sourcepub fn from_mut(v: &mut u16) -> &mut Self
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut u16) -> &mut Self
Get atomic access to a &mut u16.
Note: This function is only available on targets where AtomicU16 has the same alignment as u16.
ยงExamples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [u16]
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [u16]
Get non-atomic access to a &mut [AtomicU16] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU16, Ordering};
let mut some_ints = [const { AtomicU16::new(0) }; 10];
let view: &mut [u16] = AtomicU16::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [u16]) -> &mut [Self]
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [u16]) -> &mut [Self]
Get atomic access to a &mut [u16] slice.
Note: This function is only available on targets where AtomicU16 has the same alignment as u16.
ยงExamples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU16, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU16::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) ยท Sourcepub const fn into_inner(self) -> u16
pub const fn into_inner(self) -> u16
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
ยงExamples
1.34.0 ยท Sourcepub fn load(&self, order: Ordering) -> u16
pub fn load(&self, order: Ordering) -> u16
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Acquire and Relaxed.
1.34.0 ยท Sourcepub fn store(&self, val: u16, order: Ordering)
pub fn store(&self, val: u16, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Release and Relaxed.
1.34.0 ยท Sourcepub fn swap(&self, val: u16, order: Ordering) -> u16
pub fn swap(&self, val: u16, order: Ordering) -> u16
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn compare_and_swap(&self, current: u16, new: u16, order: Ordering) -> u16
๐Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: u16, new: u16, order: Ordering) -> u16
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงMigrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
ยงExamples
use std::sync::atomic::{AtomicU16, Ordering};
let some_var = AtomicU16::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 ยท Sourcepub fn compare_exchange(
&self,
current: u16,
new: u16,
success: Ordering,
failure: Ordering,
) -> Result<u16, u16>
pub fn compare_exchange( &self, current: u16, new: u16, success: Ordering, failure: Ordering, ) -> Result<u16, u16>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
use std::sync::atomic::{AtomicU16, Ordering};
let some_var = AtomicU16::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
ยงConsiderations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn compare_exchange_weak(
&self,
current: u16,
new: u16,
success: Ordering,
failure: Ordering,
) -> Result<u16, u16>
pub fn compare_exchange_weak( &self, current: u16, new: u16, success: Ordering, failure: Ordering, ) -> Result<u16, u16>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicU16::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
use std::sync::atomic::{AtomicU16, Ordering};
let val = AtomicU16::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
ยงConsiderations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn fetch_add(&self, val: u16, order: Ordering) -> u16
pub fn fetch_add(&self, val: u16, order: Ordering) -> u16
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_sub(&self, val: u16, order: Ordering) -> u16
pub fn fetch_sub(&self, val: u16, order: Ordering) -> u16
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_and(&self, val: u16, order: Ordering) -> u16
pub fn fetch_and(&self, val: u16, order: Ordering) -> u16
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_nand(&self, val: u16, order: Ordering) -> u16
pub fn fetch_nand(&self, val: u16, order: Ordering) -> u16
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_or(&self, val: u16, order: Ordering) -> u16
pub fn fetch_or(&self, val: u16, order: Ordering) -> u16
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_xor(&self, val: u16, order: Ordering) -> u16
pub fn fetch_xor(&self, val: u16, order: Ordering) -> u16
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงExamples
1.45.0 ยท Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u16, u16>
๐Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<u16, u16>
An alias for AtomicU16::try_update.
1.96.0 ยท Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u16) -> Option<u16>,
) -> Result<u16, u16>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u16) -> Option<u16>, ) -> Result<u16, u16>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let x = AtomicU16::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u16) -> u16,
) -> u16
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u16) -> u16, ) -> u16
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU16::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: u16, order: Ordering) -> u16
pub fn fetch_max(&self, val: u16, order: Ordering) -> u16
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let foo = AtomicU16::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: u16, order: Ordering) -> u16
pub fn fetch_min(&self, val: u16, order: Ordering) -> u16
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u16.
§Examples
use std::sync::atomic::{AtomicU16, Ordering};
let foo = AtomicU16::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut u16
pub const fn as_ptr(&self) -> *mut u16
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u16 instead of &AtomicU16.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU16;
extern "C" {
fn my_atomic_op(arg: *mut u16);
}
let atomic = AtomicU16::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<i32>
impl Atomic<i32>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: i32) -> Self
pub const fn new(v: i32) -> Self
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut i32) -> &'a AtomicI32
pub const unsafe fn from_ptr<'a>(ptr: *mut i32) -> &'a AtomicI32
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI32};
// Get a pointer to an allocated value
let ptr: *mut i32 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI32>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI32::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI32>() (note that on some platforms this can be bigger than align_of::<i32>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut i32
pub fn get_mut(&mut self) -> &mut i32
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut i32) -> &mut Self
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut i32) -> &mut Self
Get atomic access to a &mut i32.
Note: This function is only available on targets where AtomicI32 has the same alignment as i32.
ยงExamples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [i32]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [i32]
Get non-atomic access to a &mut [AtomicI32] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI32, Ordering};
let mut some_ints = [const { AtomicI32::new(0) }; 10];
let view: &mut [i32] = AtomicI32::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [i32]) -> &mut [Self]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [i32]) -> &mut [Self]
Get atomic access to a &mut [i32] slice.
Note: This function is only available on targets where AtomicI32 has the same alignment as i32.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI32, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI32::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> i32
pub const fn into_inner(self) -> i32
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> i32
pub fn load(&self, order: Ordering) -> i32
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · Sourcepub fn store(&self, val: i32, order: Ordering)
pub fn store(&self, val: i32, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · Sourcepub fn swap(&self, val: i32, order: Ordering) -> i32
pub fn swap(&self, val: i32, order: Ordering) -> i32
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: i32, new: i32, order: Ordering) -> i32
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: i32, new: i32, order: Ordering) -> i32
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let some_var = AtomicI32::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: i32,
new: i32,
success: Ordering,
failure: Ordering,
) -> Result<i32, i32>
pub fn compare_exchange( &self, current: i32, new: i32, success: Ordering, failure: Ordering, ) -> Result<i32, i32>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let some_var = AtomicI32::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: i32,
new: i32,
success: Ordering,
failure: Ordering,
) -> Result<i32, i32>
pub fn compare_exchange_weak( &self, current: i32, new: i32, success: Ordering, failure: Ordering, ) -> Result<i32, i32>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicI32::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let val = AtomicI32::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange_weak is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: i32, order: Ordering) -> i32
pub fn fetch_add(&self, val: i32, order: Ordering) -> i32
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: i32, order: Ordering) -> i32
pub fn fetch_sub(&self, val: i32, order: Ordering) -> i32
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: i32, order: Ordering) -> i32
pub fn fetch_and(&self, val: i32, order: Ordering) -> i32
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: i32, order: Ordering) -> i32
pub fn fetch_nand(&self, val: i32, order: Ordering) -> i32
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: i32, order: Ordering) -> i32
pub fn fetch_or(&self, val: i32, order: Ordering) -> i32
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: i32, order: Ordering) -> i32
pub fn fetch_xor(&self, val: i32, order: Ordering) -> i32
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
1.45.0 ยท Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i32, i32>
👎Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<i32, i32>
An alias for AtomicI32::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i32) -> Option<i32>,
) -> Result<i32, i32>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i32) -> Option<i32>, ) -> Result<i32, i32>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let x = AtomicI32::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i32) -> i32,
) -> i32
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i32) -> i32, ) -> i32
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: i32, order: Ordering) -> i32
pub fn fetch_max(&self, val: i32, order: Ordering) -> i32
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let foo = AtomicI32::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: i32, order: Ordering) -> i32
pub fn fetch_min(&self, val: i32, order: Ordering) -> i32
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i32.
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
let foo = AtomicI32::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut i32
pub const fn as_ptr(&self) -> *mut i32
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i32 instead of &AtomicI32.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicI32;
extern "C" {
fn my_atomic_op(arg: *mut i32);
}
let atomic = AtomicI32::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<u32>
impl Atomic<u32>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: u32) -> Self
pub const fn new(v: u32) -> Self
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut u32) -> &'a AtomicU32
pub const unsafe fn from_ptr<'a>(ptr: *mut u32) -> &'a AtomicU32
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicU32};
// Get a pointer to an allocated value
let ptr: *mut u32 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU32>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU32::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicU32>() (note that on some platforms this can be bigger than align_of::<u32>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut u32
pub fn get_mut(&mut self) -> &mut u32
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut u32) -> &mut Self
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut u32) -> &mut Self
Get atomic access to a &mut u32.
Note: This function is only available on targets where AtomicU32 has the same alignment as u32.
ยงExamples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [u32]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [u32]
Get non-atomic access to a &mut [AtomicU32] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU32, Ordering};
let mut some_ints = [const { AtomicU32::new(0) }; 10];
let view: &mut [u32] = AtomicU32::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [u32]) -> &mut [Self]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [u32]) -> &mut [Self]
Get atomic access to a &mut [u32] slice.
Note: This function is only available on targets where AtomicU32 has the same alignment as u32.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU32, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU32::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> u32
pub const fn into_inner(self) -> u32
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> u32
pub fn load(&self, order: Ordering) -> u32
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed. Panics if order is Release or AcqRel.
1.34.0 · Sourcepub fn store(&self, val: u32, order: Ordering)
pub fn store(&self, val: u32, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed. Panics if order is Acquire or AcqRel.
1.34.0 · Sourcepub fn swap(&self, val: u32, order: Ordering) -> u32
pub fn swap(&self, val: u32, order: Ordering) -> u32
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: u32, new: u32, order: Ordering) -> u32
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: u32, new: u32, order: Ordering) -> u32
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let some_var = AtomicU32::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: u32,
new: u32,
success: Ordering,
failure: Ordering,
) -> Result<u32, u32>
pub fn compare_exchange( &self, current: u32, new: u32, success: Ordering, failure: Ordering, ) -> Result<u32, u32>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let some_var = AtomicU32::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: u32,
new: u32,
success: Ordering,
failure: Ordering,
) -> Result<u32, u32>
pub fn compare_exchange_weak( &self, current: u32, new: u32, success: Ordering, failure: Ordering, ) -> Result<u32, u32>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicU32::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let val = AtomicU32::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: u32, order: Ordering) -> u32
pub fn fetch_add(&self, val: u32, order: Ordering) -> u32
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: u32, order: Ordering) -> u32
pub fn fetch_sub(&self, val: u32, order: Ordering) -> u32
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: u32, order: Ordering) -> u32
pub fn fetch_and(&self, val: u32, order: Ordering) -> u32
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: u32, order: Ordering) -> u32
pub fn fetch_nand(&self, val: u32, order: Ordering) -> u32
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: u32, order: Ordering) -> u32
pub fn fetch_or(&self, val: u32, order: Ordering) -> u32
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: u32, order: Ordering) -> u32
pub fn fetch_xor(&self, val: u32, order: Ordering) -> u32
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u32, u32>
👎Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<u32, u32>
An alias for
AtomicU32::try_update
.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u32) -> Option<u32>,
) -> Result<u32, u32>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u32) -> Option<u32>, ) -> Result<u32, u32>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let x = AtomicU32::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u32) -> u32,
) -> u32
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u32) -> u32, ) -> u32
Fetches the value and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU32::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: u32, order: Ordering) -> u32
pub fn fetch_max(&self, val: u32, order: Ordering) -> u32
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let foo = AtomicU32::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: u32, order: Ordering) -> u32
pub fn fetch_min(&self, val: u32, order: Ordering) -> u32
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u32.
§Examples
use std::sync::atomic::{AtomicU32, Ordering};
let foo = AtomicU32::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut u32
pub const fn as_ptr(&self) -> *mut u32
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u32 instead of &AtomicU32.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicU32;
extern "C" {
fn my_atomic_op(arg: *mut u32);
}
let atomic = AtomicU32::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<i64>
impl Atomic<i64>
1.34.0 (const: 1.34.0) · Sourcepub const fn new(v: i64) -> Self
pub const fn new(v: i64) -> Self
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut i64) -> &'a AtomicI64
pub const unsafe fn from_ptr<'a>(ptr: *mut i64) -> &'a AtomicI64
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicI64};
// Get a pointer to an allocated value
let ptr: *mut i64 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicI64>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicI64::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicI64>() (note that on some platforms this can be bigger than align_of::<i64>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 · Sourcepub fn get_mut(&mut self) -> &mut i64
pub fn get_mut(&mut self) -> &mut i64
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut i64) -> &mut Self
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut i64) -> &mut Self
Get atomic access to a &mut i64.
Note: This function is only available on targets where AtomicI64 has the same alignment as i64.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [i64]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [i64]
Get non-atomic access to a &mut [AtomicI64] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI64, Ordering};
let mut some_ints = [const { AtomicI64::new(0) }; 10];
let view: &mut [i64] = AtomicI64::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [i64]) -> &mut [Self]
🔬This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [i64]) -> &mut [Self]
Get atomic access to a &mut [i64] slice.
Note: This function is only available on targets where AtomicI64 has the same alignment as i64.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicI64, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicI64::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> i64
pub const fn into_inner(self) -> i64
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.34.0 · Sourcepub fn load(&self, order: Ordering) -> i64
pub fn load(&self, order: Ordering) -> i64
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
§Examples
1.34.0 · Sourcepub fn store(&self, val: i64, order: Ordering)
pub fn store(&self, val: i64, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation.
Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
§Examples
1.34.0 · Sourcepub fn swap(&self, val: i64, order: Ordering) -> i64
pub fn swap(&self, val: i64, order: Ordering) -> i64
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn compare_and_swap(&self, current: i64, new: i64, order: Ordering) -> i64
👎Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: i64, new: i64, order: Ordering) -> i64
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let some_var = AtomicI64::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 · Sourcepub fn compare_exchange(
&self,
current: i64,
new: i64,
success: Ordering,
failure: Ordering,
) -> Result<i64, i64>
pub fn compare_exchange( &self, current: i64, new: i64, success: Ordering, failure: Ordering, ) -> Result<i64, i64>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let some_var = AtomicI64::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn compare_exchange_weak(
&self,
current: i64,
new: i64,
success: Ordering,
failure: Ordering,
) -> Result<i64, i64>
pub fn compare_exchange_weak( &self, current: i64, new: i64, success: Ordering, failure: Ordering, ) -> Result<i64, i64>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicI64::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
use std::sync::atomic::{AtomicI64, Ordering};
let val = AtomicI64::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 · Sourcepub fn fetch_add(&self, val: i64, order: Ordering) -> i64
pub fn fetch_add(&self, val: i64, order: Ordering) -> i64
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_sub(&self, val: i64, order: Ordering) -> i64
pub fn fetch_sub(&self, val: i64, order: Ordering) -> i64
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_and(&self, val: i64, order: Ordering) -> i64
pub fn fetch_and(&self, val: i64, order: Ordering) -> i64
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_nand(&self, val: i64, order: Ordering) -> i64
pub fn fetch_nand(&self, val: i64, order: Ordering) -> i64
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_or(&self, val: i64, order: Ordering) -> i64
pub fn fetch_or(&self, val: i64, order: Ordering) -> i64
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.34.0 · Sourcepub fn fetch_xor(&self, val: i64, order: Ordering) -> i64
pub fn fetch_xor(&self, val: i64, order: Ordering) -> i64
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
§Examples
1.45.0 ยท Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<i64, i64>
๐Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<i64, i64>
An alias for AtomicI64::try_update.
1.96.0 ยท Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i64) -> Option<i64>,
) -> Result<i64, i64>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i64) -> Option<i64>, ) -> Result<i64, i64>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
use std::sync::atomic::{AtomicI64, Ordering};
let x = AtomicI64::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 ยท Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(i64) -> i64,
) -> i64
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(i64) -> i64, ) -> i64
Fetches the value and applies a function to it that returns a new value. The new value is stored and the previous value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicI64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
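The behavior of update can be sketched as a hand-rolled compare_exchange_weak loop. This is a simplified model under the orderings shown, not the actual implementation, and update_model is an illustrative name:

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// A simplified model of `update`: retry a CAS loop until the store
// succeeds, then return the previous value.
fn update_model(a: &AtomicI64, mut f: impl FnMut(i64) -> i64) -> i64 {
    let mut prev = a.load(Ordering::Relaxed);
    loop {
        match a.compare_exchange_weak(prev, f(prev), Ordering::SeqCst, Ordering::Relaxed) {
            Ok(old) => return old,
            Err(actual) => prev = actual,
        }
    }
}

fn main() {
    let x = AtomicI64::new(7);
    assert_eq!(update_model(&x, |v| v + 1), 7);
    assert_eq!(x.load(Ordering::SeqCst), 8);
}
```

Note that, as with update itself, the closure may run more than once if another thread changes the value between the load and the CAS.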
1.45.0 ยท Sourcepub fn fetch_max(&self, val: i64, order: Ordering) -> i64
pub fn fetch_max(&self, val: i64, order: Ordering) -> i64
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
ยงExamples
use std::sync::atomic::{AtomicI64, Ordering};
let foo = AtomicI64::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
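A one-step sketch (max_foo is an illustrative name):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

fn main() {
    let foo = AtomicI64::new(23);
    let bar = 42;
    // Combine the previous value with the argument to get the new maximum.
    let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
    assert_eq!(max_foo, 42);
}
```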
1.45.0 ยท Sourcepub fn fetch_min(&self, val: i64, order: Ordering) -> i64
pub fn fetch_min(&self, val: i64, order: Ordering) -> i64
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
i64.
ยงExamples
use std::sync::atomic::{AtomicI64, Ordering};
let foo = AtomicI64::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
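A one-step sketch (min_foo is an illustrative name):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

fn main() {
    let foo = AtomicI64::new(23);
    let bar = 12;
    // Combine the previous value with the argument to get the new minimum.
    let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
    assert_eq!(min_foo, 12);
}
```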
1.70.0 (const: 1.70.0) ยท Sourcepub const fn as_ptr(&self) -> *mut i64
pub const fn as_ptr(&self) -> *mut i64
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut i64 instead of &AtomicI64.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
ยงExamples
use std::sync::atomic::AtomicI64;
extern "C" {
fn my_atomic_op(arg: *mut i64);
}
let atomic = AtomicI64::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Sourceยงimpl Atomic<u64>
impl Atomic<u64>
1.34.0 (const: 1.34.0) ยท Sourcepub const fn new(v: u64) -> Self
pub const fn new(v: u64) -> Self
Creates a new atomic integer.
ยงExamples
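A minimal sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    // Construction is a plain (non-atomic) initialization.
    let atomic_forty_two = AtomicU64::new(42);
    assert_eq!(atomic_forty_two.load(Ordering::Relaxed), 42);
}
```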
1.75.0 (const: 1.84.0) ยท Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut u64) -> &'a AtomicU64
pub const unsafe fn from_ptr<'a>(ptr: *mut u64) -> &'a AtomicU64
Creates a new reference to an atomic integer from a pointer.
ยงExamples
use std::sync::atomic::{self, AtomicU64};
// Get a pointer to an allocated value
let ptr: *mut u64 = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicU64>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicU64::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
ยงSafety
- ptr must be aligned to align_of::<AtomicU64>() (note that on some platforms this can be bigger than align_of::<u64>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.34.0 ยท Sourcepub fn get_mut(&mut self) -> &mut u64
pub fn get_mut(&mut self) -> &mut u64
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
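A minimal sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let mut some_var = AtomicU64::new(10);
    // With exclusive access, the value can be read and written non-atomically.
    assert_eq!(*some_var.get_mut(), 10);
    *some_var.get_mut() = 5;
    assert_eq!(some_var.load(Ordering::SeqCst), 5);
}
```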
Sourcepub fn from_mut(v: &mut u64) -> &mut Self
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut u64) -> &mut Self
Get atomic access to a &mut u64.
Note: This function is only available on targets where AtomicU64 has the same alignment as u64.
ยงExamples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [u64]
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [u64]
Get non-atomic access to a &mut [AtomicU64] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU64, Ordering};
let mut some_ints = [const { AtomicU64::new(0) }; 10];
let view: &mut [u64] = AtomicU64::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [u64]) -> &mut [Self]
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [u64]) -> &mut [Self]
Get atomic access to a &mut [u64] slice.
Note: This function is only available on targets where AtomicU64 has the same alignment as u64.
ยงExamples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU64, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicU64::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.34.0 (const: 1.79.0) ยท Sourcepub const fn into_inner(self) -> u64
pub const fn into_inner(self) -> u64
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
ยงExamples
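A minimal sketch:

```rust
use std::sync::atomic::AtomicU64;

fn main() {
    let some_var = AtomicU64::new(5);
    // Consuming the atomic yields the plain integer.
    assert_eq!(some_var.into_inner(), 5);
}
```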
1.34.0 ยท Sourcepub fn load(&self, order: Ordering) -> u64
pub fn load(&self, order: Ordering) -> u64
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
1.34.0 ยท Sourcepub fn store(&self, val: u64, order: Ordering)
pub fn store(&self, val: u64, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
1.34.0 ยท Sourcepub fn swap(&self, val: u64, order: Ordering) -> u64
pub fn swap(&self, val: u64, order: Ordering) -> u64
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
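A minimal sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let some_var = AtomicU64::new(5);
    // swap stores the new value and returns the previous one.
    assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
    assert_eq!(some_var.load(Ordering::Relaxed), 10);
}
```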
1.34.0 ยท Sourcepub fn compare_and_swap(&self, current: u64, new: u64, order: Ordering) -> u64
๐Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap(&self, current: u64, new: u64, order: Ordering) -> u64
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงMigrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let some_var = AtomicU64::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.34.0 ยท Sourcepub fn compare_exchange(
&self,
current: u64,
new: u64,
success: Ordering,
failure: Ordering,
) -> Result<u64, u64>
pub fn compare_exchange( &self, current: u64, new: u64, success: Ordering, failure: Ordering, ) -> Result<u64, u64>
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let some_var = AtomicU64::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
ยงConsiderations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn compare_exchange_weak(
&self,
current: u64,
new: u64,
success: Ordering,
failure: Ordering,
) -> Result<u64, u64>
pub fn compare_exchange_weak( &self, current: u64, new: u64, success: Ordering, failure: Ordering, ) -> Result<u64, u64>
Stores a value into the atomic integer if the current value is the same as the current argument.
Unlike AtomicU64::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let val = AtomicU64::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
ยงConsiderations
compare_exchange_weak is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange_weak with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.34.0 ยท Sourcepub fn fetch_add(&self, val: u64, order: Ordering) -> u64
pub fn fetch_add(&self, val: u64, order: Ordering) -> u64
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
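A minimal sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let foo = AtomicU64::new(0);
    // fetch_add returns the previous value and stores the wrapping sum.
    assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```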
1.34.0 ยท Sourcepub fn fetch_sub(&self, val: u64, order: Ordering) -> u64
pub fn fetch_sub(&self, val: u64, order: Ordering) -> u64
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
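A minimal sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let foo = AtomicU64::new(20);
    // fetch_sub returns the previous value and stores the wrapping difference.
    assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```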
1.34.0 ยท Sourcepub fn fetch_and(&self, val: u64, order: Ordering) -> u64
pub fn fetch_and(&self, val: u64, order: Ordering) -> u64
Bitwise โandโ with the current value.
Performs a bitwise โandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
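A minimal sketch (the bit patterns are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let foo = AtomicU64::new(0b101101);
    // fetch_and returns the previous value and stores the bitwise "and".
    assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
}
```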
1.34.0 ยท Sourcepub fn fetch_nand(&self, val: u64, order: Ordering) -> u64
pub fn fetch_nand(&self, val: u64, order: Ordering) -> u64
Bitwise โnandโ with the current value.
Performs a bitwise โnandโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_or(&self, val: u64, order: Ordering) -> u64
pub fn fetch_or(&self, val: u64, order: Ordering) -> u64
Bitwise โorโ with the current value.
Performs a bitwise โorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
1.34.0 ยท Sourcepub fn fetch_xor(&self, val: u64, order: Ordering) -> u64
pub fn fetch_xor(&self, val: u64, order: Ordering) -> u64
Bitwise โxorโ with the current value.
Performs a bitwise โxorโ operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
1.45.0 ยท Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<u64, u64>
๐Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<u64, u64>
An alias for AtomicU64::try_update.
1.96.0 ยท Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u64) -> Option<u64>,
) -> Result<u64, u64>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u64) -> Option<u64>, ) -> Result<u64, u64>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let x = AtomicU64::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 ยท Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(u64) -> u64,
) -> u64
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(u64) -> u64, ) -> u64
Fetches the value and applies a function to it that returns a new value. The new value is stored and the previous value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicU64::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงConsiderations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
ยงExamples
1.45.0 ยท Sourcepub fn fetch_max(&self, val: u64, order: Ordering) -> u64
pub fn fetch_max(&self, val: u64, order: Ordering) -> u64
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let foo = AtomicU64::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
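A one-step sketch (max_foo is an illustrative name):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let foo = AtomicU64::new(23);
    let bar = 42;
    // Combine the previous value with the argument to get the new maximum.
    let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
    assert_eq!(max_foo, 42);
}
```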
1.45.0 ยท Sourcepub fn fetch_min(&self, val: u64, order: Ordering) -> u64
pub fn fetch_min(&self, val: u64, order: Ordering) -> u64
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
u64.
ยงExamples
use std::sync::atomic::{AtomicU64, Ordering};
let foo = AtomicU64::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
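A one-step sketch (min_foo is an illustrative name):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let foo = AtomicU64::new(23);
    let bar = 12;
    // Combine the previous value with the argument to get the new minimum.
    let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
    assert_eq!(min_foo, 12);
}
```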
1.70.0 (const: 1.70.0) ยท Sourcepub const fn as_ptr(&self) -> *mut u64
pub const fn as_ptr(&self) -> *mut u64
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut u64 instead of &AtomicU64.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
ยงExamples
use std::sync::atomic::AtomicU64;
extern "C" {
fn my_atomic_op(arg: *mut u64);
}
let atomic = AtomicU64::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Sourceยงimpl Atomic<isize>
impl Atomic<isize>
1.0.0 (const: 1.24.0) ยท Sourcepub const fn new(v: isize) -> Self
pub const fn new(v: isize) -> Self
Creates a new atomic integer.
ยงExamples
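A minimal sketch:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

fn main() {
    // Construction is a plain (non-atomic) initialization.
    let atomic_forty_two = AtomicIsize::new(42);
    assert_eq!(atomic_forty_two.load(Ordering::Relaxed), 42);
}
```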
1.75.0 (const: 1.84.0) ยท Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut isize) -> &'a AtomicIsize
pub const unsafe fn from_ptr<'a>(ptr: *mut isize) -> &'a AtomicIsize
Creates a new reference to an atomic integer from a pointer.
ยงExamples
use std::sync::atomic::{self, AtomicIsize};
// Get a pointer to an allocated value
let ptr: *mut isize = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicIsize>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicIsize::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
ยงSafety
- ptr must be aligned to align_of::<AtomicIsize>() (note that on some platforms this can be bigger than align_of::<isize>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.15.0 ยท Sourcepub fn get_mut(&mut self) -> &mut isize
pub fn get_mut(&mut self) -> &mut isize
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
Sourcepub fn from_mut(v: &mut isize) -> &mut Self
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut isize) -> &mut Self
Get atomic access to a &mut isize.
Note: This function is only available on targets where AtomicIsize has the same alignment as isize.
ยงExamples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [isize]
๐ฌThis is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [isize]
Get non-atomic access to a &mut [AtomicIsize] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
ยงExamples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicIsize, Ordering};
let mut some_ints = [const { AtomicIsize::new(0) }; 10];
let view: &mut [isize] = AtomicIsize::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [isize]) -> &mut [Self]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [isize]) -> &mut [Self]
Get atomic access to a &mut [isize] slice.
Note: This function is only available on targets where AtomicIsize has the same alignment as isize.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicIsize, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicIsize::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.15.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> isize
pub const fn into_inner(self) -> isize
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.0.0 · Sourcepub fn load(&self, order: Ordering) -> isize
pub fn load(&self, order: Ordering) -> isize
1.0.0 · Sourcepub fn store(&self, val: isize, order: Ordering)
pub fn store(&self, val: isize, order: Ordering)
1.0.0 · Sourcepub fn swap(&self, val: isize, order: Ordering) -> isize
pub fn swap(&self, val: isize, order: Ordering) -> isize
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn compare_and_swap(
&self,
current: isize,
new: isize,
order: Ordering,
) -> isize
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap( &self, current: isize, new: isize, order: Ordering, ) -> isize
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let some_var = AtomicIsize::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.10.0 · Sourcepub fn compare_exchange(
&self,
current: isize,
new: isize,
success: Ordering,
failure: Ordering,
) -> Result<isize, isize>
pub fn compare_exchange( &self, current: isize, new: isize, success: Ordering, failure: Ordering, ) -> Result<isize, isize>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let some_var = AtomicIsize::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · Sourcepub fn compare_exchange_weak(
&self,
current: isize,
new: isize,
success: Ordering,
failure: Ordering,
) -> Result<isize, isize>
pub fn compare_exchange_weak( &self, current: isize, new: isize, success: Ordering, failure: Ordering, ) -> Result<isize, isize>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicIsize::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let val = AtomicIsize::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.0.0 · Sourcepub fn fetch_add(&self, val: isize, order: Ordering) -> isize
pub fn fetch_add(&self, val: isize, order: Ordering) -> isize
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_sub(&self, val: isize, order: Ordering) -> isize
pub fn fetch_sub(&self, val: isize, order: Ordering) -> isize
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_and(&self, val: isize, order: Ordering) -> isize
pub fn fetch_and(&self, val: isize, order: Ordering) -> isize
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.27.0 · Sourcepub fn fetch_nand(&self, val: isize, order: Ordering) -> isize
pub fn fetch_nand(&self, val: isize, order: Ordering) -> isize
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_or(&self, val: isize, order: Ordering) -> isize
pub fn fetch_or(&self, val: isize, order: Ordering) -> isize
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.0.0 · Sourcepub fn fetch_xor(&self, val: isize, order: Ordering) -> isize
pub fn fetch_xor(&self, val: isize, order: Ordering) -> isize
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<isize, isize>
👎 Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<isize, isize>
An alias for AtomicIsize::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(isize) -> Option<isize>,
) -> Result<isize, isize>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(isize) -> Option<isize>, ) -> Result<isize, isize>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicIsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let x = AtomicIsize::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(isize) -> isize,
) -> isize
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(isize) -> isize, ) -> isize
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicIsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
1.45.0 · Sourcepub fn fetch_max(&self, val: isize, order: Ordering) -> isize
pub fn fetch_max(&self, val: isize, order: Ordering) -> isize
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let foo = AtomicIsize::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
1.45.0 · Sourcepub fn fetch_min(&self, val: isize, order: Ordering) -> isize
pub fn fetch_min(&self, val: isize, order: Ordering) -> isize
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
isize.
§Examples
use std::sync::atomic::{AtomicIsize, Ordering};
let foo = AtomicIsize::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut isize
pub const fn as_ptr(&self) -> *mut isize
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut isize instead of &AtomicIsize.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicIsize;
extern "C" {
fn my_atomic_op(arg: *mut isize);
}
let atomic = AtomicIsize::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}
Source§impl Atomic<usize>
impl Atomic<usize>
1.0.0 (const: 1.24.0) · Sourcepub const fn new(v: usize) -> Self
pub const fn new(v: usize) -> Self
Creates a new atomic integer.
§Examples
1.75.0 (const: 1.84.0) · Sourcepub const unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a AtomicUsize
pub const unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a AtomicUsize
Creates a new reference to an atomic integer from a pointer.
§Examples
use std::sync::atomic::{self, AtomicUsize};
// Get a pointer to an allocated value
let ptr: *mut usize = Box::into_raw(Box::new(0));
assert!(ptr.cast::<AtomicUsize>().is_aligned());
{
// Create an atomic view of the allocated value
let atomic = unsafe { AtomicUsize::from_ptr(ptr) };
// Use `atomic` for atomic operations, possibly share it with other threads
atomic.store(1, atomic::Ordering::Relaxed);
}
// It's ok to non-atomically access the value behind `ptr`,
// since the reference to the atomic ended its lifetime in the block above
assert_eq!(unsafe { *ptr }, 1);
// Deallocate the value
unsafe { drop(Box::from_raw(ptr)) }
§Safety
- ptr must be aligned to align_of::<AtomicUsize>() (note that on some platforms this can be bigger than align_of::<usize>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- You must adhere to the Memory model for atomic accesses. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization.
1.15.0 · Sourcepub fn get_mut(&mut self) -> &mut usize
pub fn get_mut(&mut self) -> &mut usize
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
Sourcepub fn from_mut(v: &mut usize) -> &mut Self
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut(v: &mut usize) -> &mut Self
Get atomic access to a &mut usize.
Note: This function is only available on targets where AtomicUsize has the same alignment as usize.
§Examples
Sourcepub fn get_mut_slice(this: &mut [Self]) -> &mut [usize]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn get_mut_slice(this: &mut [Self]) -> &mut [usize]
Get non-atomic access to a &mut [AtomicUsize] slice.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicUsize, Ordering};
let mut some_ints = [const { AtomicUsize::new(0) }; 10];
let view: &mut [usize] = AtomicUsize::get_mut_slice(&mut some_ints);
assert_eq!(view, [0; 10]);
view
.iter_mut()
.enumerate()
.for_each(|(idx, int)| *int = idx as _);
std::thread::scope(|s| {
some_ints
.iter()
.enumerate()
.for_each(|(idx, int)| {
s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
})
});
Sourcepub fn from_mut_slice(v: &mut [usize]) -> &mut [Self]
🔬 This is a nightly-only experimental API. (atomic_from_mut #76314)
pub fn from_mut_slice(v: &mut [usize]) -> &mut [Self]
Get atomic access to a &mut [usize] slice.
Note: This function is only available on targets where AtomicUsize has the same alignment as usize.
§Examples
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicUsize, Ordering};
let mut some_ints = [0; 10];
let a = &*AtomicUsize::from_mut_slice(&mut some_ints);
std::thread::scope(|s| {
for i in 0..a.len() {
s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
}
});
for (i, n) in some_ints.into_iter().enumerate() {
assert_eq!(i, n as usize);
}
1.15.0 (const: 1.79.0) · Sourcepub const fn into_inner(self) -> usize
pub const fn into_inner(self) -> usize
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
§Examples
1.0.0 · Sourcepub fn load(&self, order: Ordering) -> usize
pub fn load(&self, order: Ordering) -> usize
1.0.0 · Sourcepub fn store(&self, val: usize, order: Ordering)
pub fn store(&self, val: usize, order: Ordering)
1.0.0 · Sourcepub fn swap(&self, val: usize, order: Ordering) -> usize
pub fn swap(&self, val: usize, order: Ordering) -> usize
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn compare_and_swap(
&self,
current: usize,
new: usize,
order: Ordering,
) -> usize
👎 Deprecated since 1.50.0: Use compare_exchange or compare_exchange_weak instead
pub fn compare_and_swap( &self, current: usize, new: usize, order: Ordering, ) -> usize
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is always the previous value. If it is equal to current, then the
value was updated.
compare_and_swap also takes an Ordering argument which describes the memory
ordering of this operation. Notice that even when using AcqRel, the operation
might fail and hence just perform an Acquire load, but not have Release semantics.
Using Acquire makes the store part of this operation Relaxed if it
happens, and using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Migrating to compare_exchange and compare_exchange_weak
compare_and_swap is equivalent to compare_exchange with the following mapping for
memory orderings:
| Original | Success | Failure |
|---|---|---|
| Relaxed | Relaxed | Relaxed |
| Acquire | Acquire | Acquire |
| Release | Release | Relaxed |
| AcqRel | AcqRel | Acquire |
| SeqCst | SeqCst | SeqCst |
compare_and_swap and compare_exchange also differ in their return type. You can use
compare_exchange(...).unwrap_or_else(|x| x) to recover the behavior of compare_and_swap,
but in most cases it is more idiomatic to check whether the return value is Ok or Err
rather than to infer success vs failure based on the value that was read.
During migration, consider whether it makes sense to use compare_exchange_weak instead.
compare_exchange_weak is allowed to fail spuriously even when the comparison succeeds,
which allows the compiler to generate better assembly code when the compare and swap
is used in a loop.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
1.10.0 · Sourcepub fn compare_exchange(
&self,
current: usize,
new: usize,
success: Ordering,
failure: Ordering,
) -> Result<usize, usize>
pub fn compare_exchange( &self, current: usize, new: usize, success: Ordering, failure: Ordering, ) -> Result<usize, usize>
Stores a value into the atomic integer if the current value is the same as
the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.compare_exchange(5, 10,
Ordering::Acquire,
Ordering::Relaxed),
Ok(5));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(some_var.compare_exchange(6, 12,
Ordering::SeqCst,
Ordering::Acquire),
Err(10));
assert_eq!(some_var.load(Ordering::Relaxed), 10);
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim! This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.10.0 · Sourcepub fn compare_exchange_weak(
&self,
current: usize,
new: usize,
success: Ordering,
failure: Ordering,
) -> Result<usize, usize>
pub fn compare_exchange_weak( &self, current: usize, new: usize, success: Ordering, failure: Ordering, ) -> Result<usize, usize>
Stores a value into the atomic integer if the current value is the same as
the current argument.
Unlike AtomicUsize::compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the successful load
Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let val = AtomicUsize::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
§Considerations
compare_exchange is a compare-and-swap operation and thus exhibits the usual downsides
of CAS operations. In particular, a load of the value followed by a successful
compare_exchange with the previous load does not ensure that other threads have not
changed the value in the interim. This is usually important when the equality check in
the compare_exchange is being used to check the identity of a value, but equality
does not necessarily imply identity. This is a particularly common case for pointers, as
a pointer holding the same address does not imply that the same object exists at that
address! In this case, compare_exchange can lead to the ABA problem.
1.0.0 · Sourcepub fn fetch_add(&self, val: usize, order: Ordering) -> usize
pub fn fetch_add(&self, val: usize, order: Ordering) -> usize
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
1.0.0 · Sourcepub fn fetch_sub(&self, val: usize, order: Ordering) -> usize
pub fn fetch_sub(&self, val: usize, order: Ordering) -> usize
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
ยงExamples
1.0.0 · Sourcepub fn fetch_and(&self, val: usize, order: Ordering) -> usize
pub fn fetch_and(&self, val: usize, order: Ordering) -> usize
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
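A minimal sketch of `fetch_and` in use (bit patterns are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(0b101101);
    // fetch_and returns the previous value, then stores old & val.
    assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
}
```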
1.27.0 · Sourcepub fn fetch_nand(&self, val: usize, order: Ordering) -> usize
pub fn fetch_nand(&self, val: usize, order: Ordering) -> usize
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
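A minimal sketch of `fetch_nand` in use (values are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(0x13);
    // fetch_nand returns the previous value; the stored value becomes
    // the bitwise negation of (old & val).
    assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
    assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
}
```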
1.0.0 · Sourcepub fn fetch_or(&self, val: usize, order: Ordering) -> usize
pub fn fetch_or(&self, val: usize, order: Ordering) -> usize
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
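A minimal sketch of `fetch_or` in use (bit patterns are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(0b101101);
    // fetch_or returns the previous value, then stores old | val.
    assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
}
```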
1.0.0 · Sourcepub fn fetch_xor(&self, val: usize, order: Ordering) -> usize
pub fn fetch_xor(&self, val: usize, order: Ordering) -> usize
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
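A minimal sketch of `fetch_xor` in use (bit patterns are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(0b101101);
    // fetch_xor returns the previous value, then stores old ^ val.
    assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
    assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
}
```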
1.45.0 · Sourcepub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: F,
) -> Result<usize, usize>
👎 Deprecating in 1.99.0: renamed to try_update for consistency
pub fn fetch_update<F>( &self, set_order: Ordering, fetch_order: Ordering, f: F, ) -> Result<usize, usize>
An alias for AtomicUsize::try_update.
1.96.0 · Sourcepub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(usize) -> Option<usize>,
) -> Result<usize, usize>
pub fn try_update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(usize) -> Option<usize>, ) -> Result<usize, usize>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
See also: update.
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
try_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicUsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let x = AtomicUsize::new(7);
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
1.96.0 · Sourcepub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(usize) -> usize,
) -> usize
pub fn update( &self, set_order: Ordering, fetch_order: Ordering, f: impl FnMut(usize) -> usize, ) -> usize
Fetches the value, and applies a function to it that returns a new value. The new value is stored and the old value is returned.
See also: try_update.
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, but the function will have been applied only once to the stored value.
update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
AtomicUsize::compare_exchange
respectively.
Using Acquire as success ordering makes the store part
of this operation Relaxed, and using Release makes the final successful load
Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Considerations
This method is not magic; it is not provided by the hardware, and does not act like a critical section or mutex.
It is implemented on top of an atomic compare-and-swap operation, and thus is subject to the usual drawbacks of CAS operations. In particular, be careful of the ABA problem if this atomic integer is an index or more generally if knowledge of only the bitwise value of the atomic is not in and of itself sufficient to ensure any required preconditions.
§Examples
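A sketch of what `update` does, expressed here via the long-stable `fetch_update` with a closure that always returns `Some(_)` (which therefore never fails, matching `update`'s behavior); on a toolchain where `update` is available, `x.update(Ordering::SeqCst, Ordering::SeqCst, |v| v + 1)` is the direct form:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let x = AtomicUsize::new(7);
    // The closure is applied in a CAS loop; the previous value is returned.
    let prev = x
        .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |v| Some(v + 1))
        .unwrap();
    assert_eq!(prev, 7);
    assert_eq!(x.load(Ordering::SeqCst), 8);
}
```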
1.45.0 · Sourcepub fn fetch_max(&self, val: usize, order: Ordering) -> usize
pub fn fetch_max(&self, val: usize, order: Ordering) -> usize
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
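One possible shape (a sketch; `bar` is an illustrative local). The previous value returned by `fetch_max` may be smaller than the argument, so taking the max of the two yields the value now stored:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(23);
    let bar = 42;
    // fetch_max returns the old value (23 here); combine it with `bar`
    // to obtain the stored maximum in one expression.
    let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
    assert_eq!(max_foo, 42);
}
```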
1.45.0 · Sourcepub fn fetch_min(&self, val: usize, order: Ordering) -> usize
pub fn fetch_min(&self, val: usize, order: Ordering) -> usize
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
Acquire makes the store part of this operation Relaxed, and
using Release makes the load part Relaxed.
Note: This method is only available on platforms that support atomic operations on
usize.
§Examples
use std::sync::atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
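One possible shape (a sketch; `bar` is an illustrative local), mirroring the `fetch_max` case: combine the returned previous value with the argument to get the stored minimum:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(23);
    let bar = 12;
    // fetch_min returns the old value (23 here); take the min with `bar`
    // to obtain the stored minimum in one expression.
    let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
    assert_eq!(min_foo, 12);
}
```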
1.70.0 (const: 1.70.0) · Sourcepub const fn as_ptr(&self) -> *mut usize
pub const fn as_ptr(&self) -> *mut usize
Returns a mutable pointer to the underlying integer.
Doing non-atomic reads and writes on the resulting integer can be a data race.
This method is mostly useful for FFI, where the function signature may use
*mut usize instead of &AtomicUsize.
Returning an *mut pointer from a shared reference to this atomic is safe because the
atomic types work with interior mutability. All modifications of an atomic change the value
through a shared reference, and can do so safely as long as they use atomic operations. Any
use of the returned raw pointer requires an unsafe block and still has to uphold the
requirements of the memory model.
§Examples
use std::sync::atomic::AtomicUsize;
extern "C" {
fn my_atomic_op(arg: *mut usize);
}
let atomic = AtomicUsize::new(1);
// SAFETY: Safe as long as `my_atomic_op` is atomic.
unsafe {
my_atomic_op(atomic.as_ptr());
}