pub struct Lrc<T, A = Global> {
ptr: NonNull<ArcInner<T>>,
phantom: PhantomData<ArcInner<T>>,
alloc: A,
}
A thread-safe reference-counting pointer. ‘Arc’ stands for ‘Atomically Reference Counted’.

The type Arc<T> provides shared ownership of a value of type T, allocated in the heap. Invoking clone on Arc produces a new Arc instance, which points to the same allocation on the heap as the source Arc, while increasing a reference count. When the last Arc pointer to a given allocation is destroyed, the value stored in that allocation (often referred to as the “inner value”) is also dropped.

Shared references in Rust disallow mutation by default, and Arc is no exception: you cannot generally obtain a mutable reference to something inside an Arc. If you need to mutate through an Arc, use Mutex, RwLock, or one of the Atomic types.
Note: This type is only available on platforms that support atomic loads and stores of pointers, which includes all platforms that support the std crate but not all those which only support alloc. This may be detected at compile time using #[cfg(target_has_atomic = "ptr")].
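For example, a crate that also targets platforms without atomic pointers might switch on this cfg. The alias below is a hypothetical sketch, not something defined by the standard library:
#[cfg(target_has_atomic = "ptr")]
type Shared<T> = std::sync::Arc<T>;
#[cfg(not(target_has_atomic = "ptr"))]
type Shared<T> = std::rc::Rc<T>;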
§Thread Safety
Unlike Rc<T>, Arc<T> uses atomic operations for its reference counting. This means that it is thread-safe. The disadvantage is that atomic operations are more expensive than ordinary memory accesses. If you are not sharing reference-counted allocations between threads, consider using Rc<T> for lower overhead. Rc<T> is a safe default, because the compiler will catch any attempt to send an Rc<T> between threads. However, a library might choose Arc<T> in order to give library consumers more flexibility.

Arc<T> will implement Send and Sync as long as the T implements Send and Sync. Why can’t you put a non-thread-safe type T in an Arc<T> to make it thread-safe? This may be a bit counter-intuitive at first: after all, isn’t the point of Arc<T> thread safety? The key is this: Arc<T> makes it thread safe to have multiple ownership of the same data, but it doesn’t add thread safety to its data. Consider Arc<RefCell<T>>. RefCell<T> isn’t Sync, and if Arc<T> was always Send, Arc<RefCell<T>> would be as well. But then we’d have a problem: RefCell<T> is not thread safe; it keeps track of the borrowing count using non-atomic operations.

In the end, this means that you may need to pair Arc<T> with some sort of std::sync type, usually Mutex<T>.
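As an illustration (a minimal sketch, not taken from the sections below), sharing a mutable counter across threads typically pairs Arc with Mutex like this:
use std::sync::{Arc, Mutex};
use std::thread;

let counter = Arc::new(Mutex::new(0));
let mut handles = Vec::new();
for _ in 0..4 {
    let counter = Arc::clone(&counter);
    handles.push(thread::spawn(move || {
        // Lock the mutex to mutate the value shared through the `Arc`.
        *counter.lock().unwrap() += 1;
    }));
}
for handle in handles {
    handle.join().unwrap();
}
assert_eq!(*counter.lock().unwrap(), 4);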
§Breaking cycles with Weak
The downgrade method can be used to create a non-owning Weak pointer. A Weak pointer can be upgraded to an Arc, but this will return None if the value stored in the allocation has already been dropped. In other words, Weak pointers do not keep the value inside the allocation alive; however, they do keep the allocation (the backing store for the value) alive.

A cycle between Arc pointers will never be deallocated. For this reason, Weak is used to break cycles. For example, a tree could have strong Arc pointers from parent nodes to children, and Weak pointers from children back to their parents.
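A minimal sketch of that parent/child layout (the Node type here is hypothetical, not part of the standard library):
use std::sync::{Arc, Mutex, Weak};

struct Node {
    parent: Mutex<Weak<Node>>,
    children: Mutex<Vec<Arc<Node>>>,
}

let parent = Arc::new(Node {
    parent: Mutex::new(Weak::new()),
    children: Mutex::new(Vec::new()),
});
let child = Arc::new(Node {
    // Only a `Weak` back-reference, so no cycle of strong counts is created.
    parent: Mutex::new(Arc::downgrade(&parent)),
    children: Mutex::new(Vec::new()),
});
parent.children.lock().unwrap().push(Arc::clone(&child));
assert!(child.parent.lock().unwrap().upgrade().is_some());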
§Cloning references
Creating a new reference from an existing reference-counted pointer is done using the Clone trait implemented for Arc<T> and Weak<T>.
use std::sync::Arc;
let foo = Arc::new(vec![1.0, 2.0, 3.0]);
// The two syntaxes below are equivalent.
let a = foo.clone();
let b = Arc::clone(&foo);
// a, b, and foo are all Arcs that point to the same memory location
§Deref behavior
Arc<T> automatically dereferences to T (via the Deref trait), so you can call T’s methods on a value of type Arc<T>. To avoid name clashes with T’s methods, the methods of Arc<T> itself are associated functions, called using fully qualified syntax:
use std::sync::Arc;
let my_arc = Arc::new(());
let my_weak = Arc::downgrade(&my_arc);
Arc<T>’s implementations of traits like Clone may also be called using fully qualified syntax. Some people prefer to use fully qualified syntax, while others prefer using method-call syntax.
use std::sync::Arc;
let arc = Arc::new(());
// Method-call syntax
let arc2 = arc.clone();
// Fully qualified syntax
let arc3 = Arc::clone(&arc);
Weak<T> does not auto-dereference to T, because the inner value may have already been dropped.
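Instead, the value has to be reached by upgrading the Weak back into an Arc first; a brief sketch:
use std::sync::Arc;

let strong = Arc::new(String::from("hello"));
let weak = Arc::downgrade(&strong);
// `weak` cannot be dereferenced directly; upgrade it first.
assert_eq!(weak.upgrade().unwrap().as_str(), "hello");
drop(strong);
// Once the last `Arc` is gone, upgrading fails.
assert!(weak.upgrade().is_none());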
§Examples
Sharing some immutable data between threads:
use std::sync::Arc;
use std::thread;
let five = Arc::new(5);
for _ in 0..10 {
    let five = Arc::clone(&five);

    thread::spawn(move || {
        println!("{five:?}");
    });
}
Sharing a mutable AtomicUsize:
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;
let val = Arc::new(AtomicUsize::new(5));
for _ in 0..10 {
    let val = Arc::clone(&val);

    thread::spawn(move || {
        let v = val.fetch_add(1, Ordering::Relaxed);
        println!("{v:?}");
    });
}
See the rc documentation for more examples of reference counting in general.
Fields§
§ptr: NonNull<ArcInner<T>>
§phantom: PhantomData<ArcInner<T>>
§alloc: A
Implementations§
impl<T> Arc<T>
1.60.0 · pub fn new_cyclic<F>(data_fn: F) -> Arc<T>
Constructs a new Arc<T> while giving you a Weak<T> to the allocation, to allow you to construct a T which holds a weak pointer to itself.

Generally, a structure circularly referencing itself, either directly or indirectly, should not hold a strong reference to itself to prevent a memory leak. Using this function, you get access to the weak pointer during the initialization of T, before the Arc<T> is created, such that you can clone and store it inside the T.

new_cyclic first allocates the managed allocation for the Arc<T>, then calls your closure, giving it a Weak<T> to this allocation, and only afterwards completes the construction of the Arc<T> by placing the T returned from your closure into the allocation.

Since the new Arc<T> is not fully-constructed until Arc<T>::new_cyclic returns, calling upgrade on the weak reference inside your closure will fail and result in a None value.

§Panics
If data_fn panics, the panic is propagated to the caller, and the temporary Weak<T> is dropped normally.
§Example
use std::sync::{Arc, Weak};

struct Gadget {
    me: Weak<Gadget>,
}

impl Gadget {
    /// Constructs a reference counted Gadget.
    fn new() -> Arc<Self> {
        // `me` is a `Weak<Gadget>` pointing at the new allocation of the
        // `Arc` we're constructing.
        Arc::new_cyclic(|me| {
            // Create the actual struct here.
            Gadget { me: me.clone() }
        })
    }

    /// Returns a reference counted pointer to Self.
    fn me(&self) -> Arc<Self> {
        self.me.upgrade().unwrap()
    }
}
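A brief usage sketch of the Gadget type from the example above (the calls are illustrative, not part of the original example):
let gadget = Gadget::new();
// `me()` upgrades the stored `Weak` back into an `Arc` pointing at the same allocation.
assert!(Arc::ptr_eq(&gadget, &gadget.me()));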
1.82.0 · pub fn new_uninit() -> Arc<MaybeUninit<T>>
Constructs a new Arc with uninitialized contents.
§Examples
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut five = Arc::<u32>::new_uninit();
// Deferred initialization:
Arc::get_mut(&mut five).unwrap().write(5);
let five = unsafe { five.assume_init() };
assert_eq!(*five, 5)
pub fn new_zeroed() -> Arc<MaybeUninit<T>>
🔬 This is a nightly-only experimental API. (new_zeroed_alloc)
Constructs a new Arc with uninitialized contents, with the memory being filled with 0 bytes.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(new_zeroed_alloc)]
use std::sync::Arc;
let zero = Arc::<u32>::new_zeroed();
let zero = unsafe { zero.assume_init() };
assert_eq!(*zero, 0)
1.33.0 · pub fn pin(data: T) -> Pin<Arc<T>>
Constructs a new Pin<Arc<T>>. If T does not implement Unpin, then data will be pinned in memory and unable to be moved.
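A short illustrative sketch (not from the original documentation of this method):
use std::sync::Arc;

let pinned = Arc::pin(5);
// The pinned value can still be read through `Deref`.
assert_eq!(*pinned, 5);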
pub fn try_pin(data: T) -> Result<Pin<Arc<T>>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Pin<Arc<T>>, returning an error if allocation fails.
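A minimal sketch of the fallible variant (nightly only; assumes the allocator_api feature):
#![feature(allocator_api)]
use std::sync::Arc;

fn main() -> Result<(), std::alloc::AllocError> {
    let pinned = Arc::try_pin(5)?;
    assert_eq!(*pinned, 5);
    Ok(())
}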
pub fn try_new(data: T) -> Result<Arc<T>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc<T>, returning an error if allocation fails.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
let five = Arc::try_new(5)?;
pub fn try_new_uninit() -> Result<Arc<MaybeUninit<T>>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents, returning an error if allocation fails.
§Examples
#![feature(allocator_api)]
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut five = Arc::<u32>::try_new_uninit()?;
// Deferred initialization:
Arc::get_mut(&mut five).unwrap().write(5);
let five = unsafe { five.assume_init() };
assert_eq!(*five, 5);
pub fn try_new_zeroed() -> Result<Arc<MaybeUninit<T>>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents, with the memory being filled with 0 bytes, returning an error if allocation fails.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
let zero = Arc::<u32>::try_new_zeroed()?;
let zero = unsafe { zero.assume_init() };
assert_eq!(*zero, 0);
impl<T, A> Arc<T, A> where A: Allocator
pub fn new_in(data: T, alloc: A) -> Arc<T, A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc<T> in the provided allocator.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let five = Arc::new_in(5, System);
pub fn new_uninit_in(alloc: A) -> Arc<MaybeUninit<T>, A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents in the provided allocator.
§Examples
#![feature(get_mut_unchecked)]
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let mut five = Arc::<u32, _>::new_uninit_in(System);
let five = unsafe {
// Deferred initialization:
Arc::get_mut_unchecked(&mut five).as_mut_ptr().write(5);
five.assume_init()
};
assert_eq!(*five, 5)
pub fn new_zeroed_in(alloc: A) -> Arc<MaybeUninit<T>, A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents, with the memory being filled with 0 bytes, in the provided allocator.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let zero = Arc::<u32, _>::new_zeroed_in(System);
let zero = unsafe { zero.assume_init() };
assert_eq!(*zero, 0)
pub fn new_cyclic_in<F>(data_fn: F, alloc: A) -> Arc<T, A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc<T, A> in the given allocator while giving you a Weak<T, A> to the allocation, to allow you to construct a T which holds a weak pointer to itself.

Generally, a structure circularly referencing itself, either directly or indirectly, should not hold a strong reference to itself to prevent a memory leak. Using this function, you get access to the weak pointer during the initialization of T, before the Arc<T, A> is created, such that you can clone and store it inside the T.

new_cyclic_in first allocates the managed allocation for the Arc<T, A>, then calls your closure, giving it a Weak<T, A> to this allocation, and only afterwards completes the construction of the Arc<T, A> by placing the T returned from your closure into the allocation.

Since the new Arc<T, A> is not fully-constructed until Arc<T, A>::new_cyclic_in returns, calling upgrade on the weak reference inside your closure will fail and result in a None value.

§Panics
If data_fn panics, the panic is propagated to the caller, and the temporary Weak<T> is dropped normally.

§Example
See new_cyclic.
pub fn pin_in(data: T, alloc: A) -> Pin<Arc<T, A>> where A: 'static
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Pin<Arc<T, A>> in the provided allocator. If T does not implement Unpin, then data will be pinned in memory and unable to be moved.
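An illustrative sketch using the System allocator (nightly only; assumes the allocator_api feature):
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;

let pinned = Arc::pin_in(5, System);
assert_eq!(*pinned, 5);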
pub fn try_pin_in(data: T, alloc: A) -> Result<Pin<Arc<T, A>>, AllocError> where A: 'static
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Pin<Arc<T, A>> in the provided allocator, returning an error if allocation fails.
pub fn try_new_in(data: T, alloc: A) -> Result<Arc<T, A>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc<T, A> in the provided allocator, returning an error if allocation fails.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let five = Arc::try_new_in(5, System)?;
pub fn try_new_uninit_in(alloc: A) -> Result<Arc<MaybeUninit<T>, A>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents, in the provided allocator, returning an error if allocation fails.
§Examples
#![feature(allocator_api)]
#![feature(get_mut_unchecked)]
use std::sync::Arc;
use std::alloc::System;
let mut five = Arc::<u32, _>::try_new_uninit_in(System)?;
let five = unsafe {
// Deferred initialization:
Arc::get_mut_unchecked(&mut five).as_mut_ptr().write(5);
five.assume_init()
};
assert_eq!(*five, 5);
pub fn try_new_zeroed_in(alloc: A) -> Result<Arc<MaybeUninit<T>, A>, AllocError>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new Arc with uninitialized contents, with the memory being filled with 0 bytes, in the provided allocator, returning an error if allocation fails.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let zero = Arc::<u32, _>::try_new_zeroed_in(System)?;
let zero = unsafe { zero.assume_init() };
assert_eq!(*zero, 0);
1.4.0 · pub fn try_unwrap(this: Arc<T, A>) -> Result<T, Arc<T, A>>
Returns the inner value, if the Arc has exactly one strong reference.

Otherwise, an Err is returned with the same Arc that was passed in.

This will succeed even if there are outstanding weak references.

It is strongly recommended to use Arc::into_inner instead if you don’t keep the Arc in the Err case. Immediately dropping the Err-value, as the expression Arc::try_unwrap(this).ok() does, can cause the strong count to drop to zero and the inner value of the Arc to be dropped. For instance, if two threads execute such an expression in parallel, there is a race condition without the possibility of unsafety: the threads could first both check whether they own the last instance in Arc::try_unwrap, determine that they both do not, and then both discard and drop their instance in the call to ok. In this scenario, the value inside the Arc is safely destroyed by exactly one of the threads, but neither thread will ever be able to use the value.
§Examples
use std::sync::Arc;
let x = Arc::new(3);
assert_eq!(Arc::try_unwrap(x), Ok(3));
let x = Arc::new(4);
let _y = Arc::clone(&x);
assert_eq!(*Arc::try_unwrap(x).unwrap_err(), 4);
1.70.0 · pub fn into_inner(this: Arc<T, A>) -> Option<T>
Returns the inner value, if the Arc has exactly one strong reference.

Otherwise, None is returned and the Arc is dropped.

This will succeed even if there are outstanding weak references.

If Arc::into_inner is called on every clone of this Arc, it is guaranteed that exactly one of the calls returns the inner value. This means in particular that the inner value is not dropped.

Arc::try_unwrap is conceptually similar to Arc::into_inner, but it is meant for different use-cases. If used as a direct replacement for Arc::into_inner anyway, such as with the expression Arc::try_unwrap(this).ok(), then it does not give the same guarantee as described in the previous paragraph. For more information, see the examples below and read the documentation of Arc::try_unwrap.
§Examples
Minimal example demonstrating the guarantee that Arc::into_inner gives.
use std::sync::Arc;
let x = Arc::new(3);
let y = Arc::clone(&x);
// Two threads calling `Arc::into_inner` on both clones of an `Arc`:
let x_thread = std::thread::spawn(|| Arc::into_inner(x));
let y_thread = std::thread::spawn(|| Arc::into_inner(y));
let x_inner_value = x_thread.join().unwrap();
let y_inner_value = y_thread.join().unwrap();
// One of the threads is guaranteed to receive the inner value:
assert!(matches!(
    (x_inner_value, y_inner_value),
    (None, Some(3)) | (Some(3), None)
));
// The result could also be `(None, None)` if the threads called
// `Arc::try_unwrap(x).ok()` and `Arc::try_unwrap(y).ok()` instead.
A more practical example demonstrating the need for Arc::into_inner:
use std::sync::Arc;

// Definition of a simple singly linked list using `Arc`:
#[derive(Clone)]
struct LinkedList<T>(Option<Arc<Node<T>>>);
struct Node<T>(T, Option<Arc<Node<T>>>);

// Dropping a long `LinkedList<T>` relying on the destructor of `Arc`
// can cause a stack overflow. To prevent this, we can provide a
// manual `Drop` implementation that does the destruction in a loop:
impl<T> Drop for LinkedList<T> {
    fn drop(&mut self) {
        let mut link = self.0.take();
        while let Some(arc_node) = link.take() {
            if let Some(Node(_value, next)) = Arc::into_inner(arc_node) {
                link = next;
            }
        }
    }
}

// Implementation of `new` and `push` omitted
impl<T> LinkedList<T> {
    /* ... */
}

// The following code could have still caused a stack overflow
// despite the manual `Drop` impl if that `Drop` impl had used
// `Arc::try_unwrap(arc).ok()` instead of `Arc::into_inner(arc)`.

// Create a long list and clone it
let mut x = LinkedList::new();
let size = 100000;
for i in 0..size {
    x.push(i); // Adds i to the front of x
}
let y = x.clone();

// Drop the clones in parallel
let x_thread = std::thread::spawn(|| drop(x));
let y_thread = std::thread::spawn(|| drop(y));
x_thread.join().unwrap();
y_thread.join().unwrap();
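The example above leaves new and push unimplemented; a possible (purely illustrative) implementation consistent with the types used there could look like this:
impl<T> LinkedList<T> {
    fn new() -> Self {
        LinkedList(None)
    }

    fn push(&mut self, value: T) {
        // Prepend a node that points at the previous head.
        let head = self.0.take();
        self.0 = Some(Arc::new(Node(value, head)));
    }
}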
impl<T> Arc<[T]>
1.82.0 · pub fn new_uninit_slice(len: usize) -> Arc<[MaybeUninit<T>]>
Constructs a new atomically reference-counted slice with uninitialized contents.
§Examples
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut values = Arc::<[u32]>::new_uninit_slice(3);
// Deferred initialization:
let data = Arc::get_mut(&mut values).unwrap();
data[0].write(1);
data[1].write(2);
data[2].write(3);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [1, 2, 3])
pub fn new_zeroed_slice(len: usize) -> Arc<[MaybeUninit<T>]>
🔬 This is a nightly-only experimental API. (new_zeroed_alloc)
Constructs a new atomically reference-counted slice with uninitialized contents, with the memory being filled with 0 bytes.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(new_zeroed_alloc)]
use std::sync::Arc;
let values = Arc::<[u32]>::new_zeroed_slice(3);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [0, 0, 0])
impl<T, A> Arc<[T], A> where A: Allocator
pub fn new_uninit_slice_in(len: usize, alloc: A) -> Arc<[MaybeUninit<T>], A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new atomically reference-counted slice with uninitialized contents in the provided allocator.
§Examples
#![feature(get_mut_unchecked)]
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let mut values = Arc::<[u32], _>::new_uninit_slice_in(3, System);
let values = unsafe {
// Deferred initialization:
Arc::get_mut_unchecked(&mut values)[0].as_mut_ptr().write(1);
Arc::get_mut_unchecked(&mut values)[1].as_mut_ptr().write(2);
Arc::get_mut_unchecked(&mut values)[2].as_mut_ptr().write(3);
values.assume_init()
};
assert_eq!(*values, [1, 2, 3])
pub fn new_zeroed_slice_in(len: usize, alloc: A) -> Arc<[MaybeUninit<T>], A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs a new atomically reference-counted slice with uninitialized contents, with the memory being filled with 0 bytes, in the provided allocator.
See MaybeUninit::zeroed for examples of correct and incorrect usage of this method.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let values = Arc::<[u32], _>::new_zeroed_slice_in(3, System);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [0, 0, 0])
impl<T, A> Arc<MaybeUninit<T>, A> where A: Allocator
1.82.0 · pub unsafe fn assume_init(self) -> Arc<T, A>
Converts to Arc<T>.
§Safety
As with MaybeUninit::assume_init, it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior.
§Examples
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut five = Arc::<u32>::new_uninit();
// Deferred initialization:
Arc::get_mut(&mut five).unwrap().write(5);
let five = unsafe { five.assume_init() };
assert_eq!(*five, 5)
impl<T, A> Arc<[MaybeUninit<T>], A> where A: Allocator
1.82.0 · pub unsafe fn assume_init(self) -> Arc<[T], A>
Converts to Arc<[T]>.
§Safety
As with MaybeUninit::assume_init, it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior.
§Examples
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut values = Arc::<[u32]>::new_uninit_slice(3);
// Deferred initialization:
let data = Arc::get_mut(&mut values).unwrap();
data[0].write(1);
data[1].write(2);
data[2].write(3);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [1, 2, 3])
impl<T> Arc<T> where T: ?Sized
1.17.0 · pub unsafe fn from_raw(ptr: *const T) -> Arc<T>
Constructs an Arc<T> from a raw pointer.

The raw pointer must have been previously returned by a call to Arc<U>::into_raw with the following requirements:

- If U is sized, it must have the same size and alignment as T. This is trivially true if U is T.
- If U is unsized, its data pointer must have the same size and alignment as T. This is trivially true if Arc<U> was constructed through Arc<T> and then converted to Arc<U> through an unsized coercion.

Note that if U or U’s data pointer is not T but has the same size and alignment, this is basically like transmuting references of different types. See mem::transmute for more information on what restrictions apply in this case.

The user of from_raw has to make sure a specific value of T is only dropped once.

This function is unsafe because improper use may lead to memory unsafety, even if the returned Arc<T> is never accessed.
§Examples
use std::sync::Arc;
let x = Arc::new("hello".to_owned());
let x_ptr = Arc::into_raw(x);
unsafe {
// Convert back to an `Arc` to prevent leak.
let x = Arc::from_raw(x_ptr);
assert_eq!(&*x, "hello");
// Further calls to `Arc::from_raw(x_ptr)` would be memory-unsafe.
}
// The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling!
Convert a slice back into its original array:
use std::sync::Arc;
let x: Arc<[u32]> = Arc::new([1, 2, 3]);
let x_ptr: *const [u32] = Arc::into_raw(x);
unsafe {
let x: Arc<[u32; 3]> = Arc::from_raw(x_ptr.cast::<[u32; 3]>());
assert_eq!(&*x, &[1, 2, 3]);
}
1.51.0 · pub unsafe fn increment_strong_count(ptr: *const T)
Increments the strong reference count on the Arc<T> associated with the provided pointer by one.
§Safety
The pointer must have been obtained through Arc::into_raw, and the associated Arc instance must be valid (i.e. the strong count must be at least 1) for the duration of this method.
§Examples
use std::sync::Arc;
let five = Arc::new(5);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count(ptr);
// This assertion is deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw(ptr);
assert_eq!(2, Arc::strong_count(&five));
}
1.51.0 · pub unsafe fn decrement_strong_count(ptr: *const T)
Decrements the strong reference count on the Arc<T> associated with the provided pointer by one.
§Safety
The pointer must have been obtained through Arc::into_raw, and the associated Arc instance must be valid (i.e. the strong count must be at least 1) when invoking this method. This method can be used to release the final Arc and backing storage, but should not be called after the final Arc has been released.
§Examples
use std::sync::Arc;
let five = Arc::new(5);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count(ptr);
// Those assertions are deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw(ptr);
assert_eq!(2, Arc::strong_count(&five));
Arc::decrement_strong_count(ptr);
assert_eq!(1, Arc::strong_count(&five));
}
impl<T, A> Arc<T, A>
pub fn allocator(this: &Arc<T, A>) -> &A
🔬 This is a nightly-only experimental API. (allocator_api)
Returns a reference to the underlying allocator.
Note: this is an associated function, which means that you have to call it as Arc::allocator(&a) instead of a.allocator(). This is so that there is no conflict with a method on the inner type.
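For instance (a sketch on nightly with the allocator_api feature, using the System allocator):
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;

let a = Arc::new_in(5, System);
// Called as an associated function to avoid clashing with methods on the inner type.
let _alloc: &System = Arc::allocator(&a);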
1.17.0 · pub fn into_raw(this: Arc<T, A>) -> *const T
Consumes the Arc, returning the wrapped pointer.
To avoid a memory leak the pointer must be converted back to an Arc using Arc::from_raw.
§Examples
use std::sync::Arc;
let x = Arc::new("hello".to_owned());
let x_ptr = Arc::into_raw(x);
assert_eq!(unsafe { &*x_ptr }, "hello");
pub fn into_raw_with_allocator(this: Arc<T, A>) -> (*const T, A)
🔬 This is a nightly-only experimental API. (allocator_api)
Consumes the Arc, returning the wrapped pointer and allocator.
To avoid a memory leak the pointer must be converted back to an Arc using Arc::from_raw_in.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let x = Arc::new_in("hello".to_owned(), System);
let (ptr, alloc) = Arc::into_raw_with_allocator(x);
assert_eq!(unsafe { &*ptr }, "hello");
let x = unsafe { Arc::from_raw_in(ptr, alloc) };
assert_eq!(&*x, "hello");
1.45.0 · pub fn as_ptr(this: &Arc<T, A>) -> *const T
Provides a raw pointer to the data.
The counts are not affected in any way and the Arc is not consumed. The pointer is valid for as long as there are strong counts in the Arc.
§Examples
use std::sync::Arc;
let x = Arc::new("hello".to_owned());
let y = Arc::clone(&x);
let x_ptr = Arc::as_ptr(&x);
assert_eq!(x_ptr, Arc::as_ptr(&y));
assert_eq!(unsafe { &*x_ptr }, "hello");
pub unsafe fn from_raw_in(ptr: *const T, alloc: A) -> Arc<T, A>
🔬 This is a nightly-only experimental API. (allocator_api)
Constructs an Arc<T, A> from a raw pointer.

The raw pointer must have been previously returned by a call to Arc<U, A>::into_raw with the following requirements:

- If U is sized, it must have the same size and alignment as T. This is trivially true if U is T.
- If U is unsized, its data pointer must have the same size and alignment as T. This is trivially true if Arc<U> was constructed through Arc<T> and then converted to Arc<U> through an unsized coercion.

Note that if U or U’s data pointer is not T but has the same size and alignment, this is basically like transmuting references of different types. See mem::transmute for more information on what restrictions apply in this case.

The raw pointer must point to a block of memory allocated by alloc.

The user of from_raw has to make sure a specific value of T is only dropped once.

This function is unsafe because improper use may lead to memory unsafety, even if the returned Arc<T> is never accessed.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let x = Arc::new_in("hello".to_owned(), System);
let x_ptr = Arc::into_raw(x);
unsafe {
// Convert back to an `Arc` to prevent leak.
let x = Arc::from_raw_in(x_ptr, System);
assert_eq!(&*x, "hello");
// Further calls to `Arc::from_raw(x_ptr)` would be memory-unsafe.
}
// The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling!
Convert a slice back into its original array:
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let x: Arc<[u32], _> = Arc::new_in([1, 2, 3], System);
let x_ptr: *const [u32] = Arc::into_raw(x);
unsafe {
let x: Arc<[u32; 3], _> = Arc::from_raw_in(x_ptr.cast::<[u32; 3]>(), System);
assert_eq!(&*x, &[1, 2, 3]);
}
1.15.0 · pub fn weak_count(this: &Arc<T, A>) -> usize
Gets the number of Weak pointers to this allocation.
§Safety
This method by itself is safe, but using it correctly requires extra care. Another thread can change the weak count at any time, including potentially between calling this method and acting on the result.
§Examples
use std::sync::Arc;
let five = Arc::new(5);
let _weak_five = Arc::downgrade(&five);
// This assertion is deterministic because we haven't shared
// the `Arc` or `Weak` between threads.
assert_eq!(1, Arc::weak_count(&five));
1.15.0 · pub fn strong_count(this: &Arc<T, A>) -> usize
Gets the number of strong (Arc) pointers to this allocation.
§Safety
This method by itself is safe, but using it correctly requires extra care. Another thread can change the strong count at any time, including potentially between calling this method and acting on the result.
§Examples
use std::sync::Arc;
let five = Arc::new(5);
let _also_five = Arc::clone(&five);
// This assertion is deterministic because we haven't shared
// the `Arc` between threads.
assert_eq!(2, Arc::strong_count(&five));
pub unsafe fn increment_strong_count_in(ptr: *const T, alloc: A) where A: Clone
🔬 This is a nightly-only experimental API. (allocator_api)
Increments the strong reference count on the Arc<T> associated with the provided pointer by one.
§Safety
The pointer must have been obtained through Arc::into_raw, the associated Arc instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and ptr must point to a block of memory allocated by alloc.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let five = Arc::new_in(5, System);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count_in(ptr, System);
// This assertion is deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw_in(ptr, System);
assert_eq!(2, Arc::strong_count(&five));
}
pub unsafe fn decrement_strong_count_in(ptr: *const T, alloc: A)
🔬 This is a nightly-only experimental API. (allocator_api)
Decrements the strong reference count on the Arc<T> associated with the provided pointer by one.
§Safety
The pointer must have been obtained through Arc::into_raw, the associated Arc instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and ptr must point to a block of memory allocated by alloc. This method can be used to release the final Arc and backing storage, but should not be called after the final Arc has been released.
§Examples
#![feature(allocator_api)]
use std::sync::Arc;
use std::alloc::System;
let five = Arc::new_in(5, System);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count_in(ptr, System);
// Those assertions are deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw_in(ptr, System);
assert_eq!(2, Arc::strong_count(&five));
Arc::decrement_strong_count_in(ptr, System);
assert_eq!(1, Arc::strong_count(&five));
}
1.17.0 · pub fn ptr_eq(this: &Arc<T, A>, other: &Arc<T, A>) -> bool
Returns true if the two Arcs point to the same allocation in a vein similar to ptr::eq. This function ignores the metadata of dyn Trait pointers.
§Examples
use std::sync::Arc;
let five = Arc::new(5);
let same_five = Arc::clone(&five);
let other_five = Arc::new(5);
assert!(Arc::ptr_eq(&five, &same_five));
assert!(!Arc::ptr_eq(&five, &other_five));
impl<T, A> Arc<T, A>
1.4.0 · pub fn make_mut(this: &mut Arc<T, A>) -> &mut T
Makes a mutable reference into the given Arc.

If there are other Arc pointers to the same allocation, then make_mut will clone the inner value to a new allocation to ensure unique ownership. This is also referred to as clone-on-write.

However, if there are no other Arc pointers to this allocation, but some Weak pointers, then the Weak pointers will be dissociated and the inner value will not be cloned.

See also get_mut, which will fail rather than cloning the inner value or dissociating Weak pointers.
§Examples
use std::sync::Arc;
let mut data = Arc::new(5);
*Arc::make_mut(&mut data) += 1; // Won't clone anything
let mut other_data = Arc::clone(&data); // Won't clone inner data
*Arc::make_mut(&mut data) += 1; // Clones inner data
*Arc::make_mut(&mut data) += 1; // Won't clone anything
*Arc::make_mut(&mut other_data) *= 2; // Won't clone anything
// Now `data` and `other_data` point to different allocations.
assert_eq!(*data, 8);
assert_eq!(*other_data, 12);
Weak pointers will be dissociated:
use std::sync::Arc;
let mut data = Arc::new(75);
let weak = Arc::downgrade(&data);
assert!(75 == *data);
assert!(75 == *weak.upgrade().unwrap());
*Arc::make_mut(&mut data) += 1;
assert!(76 == *data);
assert!(weak.upgrade().is_none());
impl<T, A> Arc<T, A>
1.76.0 · pub fn unwrap_or_clone(this: Arc<T, A>) -> T
If we have the only reference to T then unwrap it. Otherwise, clone T and return the clone.
Assuming arc_t is of type Arc<T>, this function is functionally equivalent to (*arc_t).clone(), but will avoid cloning the inner value where possible.
§Examples
use std::ptr;
use std::sync::Arc;

let inner = String::from("test");
let ptr = inner.as_ptr();
let arc = Arc::new(inner);
let inner = Arc::unwrap_or_clone(arc);
// The inner value was not cloned
assert!(ptr::eq(ptr, inner.as_ptr()));
let arc = Arc::new(inner);
let arc2 = arc.clone();
let inner = Arc::unwrap_or_clone(arc);
// Because there were 2 references, we had to clone the inner value.
assert!(!ptr::eq(ptr, inner.as_ptr()));
// `arc2` is the last reference, so when we unwrap it we get back
// the original `String`.
let inner = Arc::unwrap_or_clone(arc2);
assert!(ptr::eq(ptr, inner.as_ptr()));
impl<T, A> Arc<T, A>
1.4.0 · pub fn get_mut(this: &mut Arc<T, A>) -> Option<&mut T>
Returns a mutable reference into the given Arc, if there are no other Arc or Weak pointers to the same allocation.
Returns None otherwise, because it is not safe to mutate a shared value.
See also make_mut, which will clone the inner value when there are other Arc pointers.
§Examples
use std::sync::Arc;
let mut x = Arc::new(3);
*Arc::get_mut(&mut x).unwrap() = 4;
assert_eq!(*x, 4);
let _y = Arc::clone(&x);
assert!(Arc::get_mut(&mut x).is_none());
pub unsafe fn get_mut_unchecked(this: &mut Arc<T, A>) -> &mut T
🔬 This is a nightly-only experimental API. (get_mut_unchecked)
Returns a mutable reference into the given Arc, without any check.
See also get_mut, which is safe and does appropriate checks.
§Safety
If any other Arc or Weak pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this Arc (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after Arc::new.
§Examples
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let mut x = Arc::new(String::new());
unsafe {
Arc::get_mut_unchecked(&mut x).push_str("foo")
}
assert_eq!(*x, "foo");
Other Arc pointers to the same allocation must be to the same type.
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let x: Arc<str> = Arc::from("Hello, world!");
let mut y: Arc<[u8]> = x.clone().into();
unsafe {
// this is Undefined Behavior, because x's inner type is str, not [u8]
Arc::get_mut_unchecked(&mut y).fill(0xff); // 0xff is invalid in UTF-8
}
println!("{}", &*x); // Invalid UTF-8 in a str
Other Arc pointers to the same allocation must be to the exact same type, including lifetimes.
#![feature(get_mut_unchecked)]
use std::sync::Arc;
let x: Arc<&str> = Arc::new("Hello, world!");
{
let s = String::from("Oh, no!");
let mut y: Arc<&str> = x.clone().into();
unsafe {
// this is Undefined Behavior, because x's inner type
// is &'long str, not &'short str
*Arc::get_mut_unchecked(&mut y) = &s;
}
}
println!("{}", &*x); // Use-after-free
impl<A> Arc<dyn Any + Send + Sync, A> where A: Allocator
1.29.0 · pub fn downcast<T>(self) -> Result<Arc<T, A>, Arc<dyn Any + Send + Sync, A>>
Attempts to downcast the Arc<dyn Any + Send + Sync> to a concrete type.
§Examples
use std::any::Any;
use std::sync::Arc;
fn print_if_string(value: Arc<dyn Any + Send + Sync>) {
    if let Ok(string) = value.downcast::<String>() {
        println!("String ({}): {}", string.len(), string);
    }
}
let my_string = "Hello World".to_string();
print_if_string(Arc::new(my_string));
print_if_string(Arc::new(0i8));
pub unsafe fn downcast_unchecked<T>(self) -> Arc<T, A>
🔬 This is a nightly-only experimental API. (downcast_unchecked)
Downcasts the Arc<dyn Any + Send + Sync> to a concrete type.
For a safe alternative see downcast.
§Examples
#![feature(downcast_unchecked)]
use std::any::Any;
use std::sync::Arc;
let x: Arc<dyn Any + Send + Sync> = Arc::new(1_usize);
unsafe {
assert_eq!(*x.downcast_unchecked::<usize>(), 1);
}
§Safety
The contained value must be of type T. Calling this method with the incorrect type is undefined behavior.
Trait Implementations§
impl<T> AnyProvider for Arc<T> where T: AnyProvider + ?Sized
fn load_any(&self, key: DataKey, req: DataRequest<'_>) -> Result<AnyResponse, DataError>

impl<T> AsFd for Arc<T>
This impl allows implementing traits that require AsFd on Arc.
use std::net::UdpSocket;
use std::sync::Arc;

trait MyTrait: AsFd {}
impl MyTrait for Arc<UdpSocket> {}
impl MyTrait for Box<UdpSocket> {}
fn as_fd(&self) -> BorrowedFd<'_>

impl<T> AsRawFd for Arc<T> where T: AsRawFd
This impl allows implementing traits that require AsRawFd on Arc.
use std::net::UdpSocket;
use std::sync::Arc;

trait MyTrait: AsRawFd {}
impl MyTrait for Arc<UdpSocket> {}
impl MyTrait for Box<UdpSocket> {}

impl<M, P> BoundDataProvider<M> for Arc<P>
fn load_bound(&self, req: DataRequest<'_>) -> Result<DataResponse<M>, DataError>

impl<T> BufferProvider for Arc<T> where T: BufferProvider + ?Sized
fn load_buffer(&self, key: DataKey, req: DataRequest<'_>) -> Result<DataResponse<BufferMarker>, DataError>
impl<T, A> Clone for Arc<T, A>
fn clone(&self) -> Arc<T, A>
Makes a clone of the Arc pointer.
This creates another pointer to the same allocation, increasing the strong reference count.
§Examples
use std::sync::Arc;

let five = Arc::new(5);

let _ = Arc::clone(&five);
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl<M, P> DataProvider<M> for Arc<P>
fn load(&self, req: DataRequest<'_>) -> Result<DataResponse<M>, DataError>

impl<T, A> Drop for Arc<T, A>
fn drop(&mut self)
Drops the Arc.
This will decrement the strong reference count. If the strong reference count reaches zero then the only other references (if any) are Weak, so we drop the inner value.
§Examples
use std::sync::Arc;

struct Foo;

impl Drop for Foo {
    fn drop(&mut self) {
        println!("dropped!");
    }
}

let foo = Arc::new(Foo);
let foo2 = Arc::clone(&foo);

drop(foo);  // Doesn't print anything
drop(foo2); // Prints "dropped!"

impl<M, P> DynamicDataProvider<M> for Arc<P>
fn load_data(&self, key: DataKey, req: DataRequest<'_>) -> Result<DataResponse<M>, DataError>
impl<T> Error for Arc<T>
fn description(&self) -> &str
fn cause(&self) -> Option<&dyn Error>

impl<T> FromIterator<T> for Arc<[T]>
fn from_iter<I>(iter: I) -> Arc<[T]> where I: IntoIterator<Item = T>
Takes each element in the Iterator and collects it into an Arc<[T]>.
§Performance characteristics
§The general case
In the general case, collecting into Arc<[T]> is done by first collecting into a Vec<T>. That is, when writing the following:
let evens: Arc<[u8]> = (0..10).filter(|&x| x % 2 == 0).collect();
this behaves as if we wrote:
let evens: Arc<[u8]> = (0..10).filter(|&x| x % 2 == 0)
    .collect::<Vec<_>>() // The first set of allocations happens here.
    .into(); // A second allocation for `Arc<[T]>` happens here.
This will allocate as many times as needed for constructing the Vec<T> and then it will allocate once for turning the Vec<T> into the Arc<[T]>.
§Iterators of known length
When your Iterator implements TrustedLen and is of an exact size, a single allocation will be made for the Arc<[T]>. For example:
let evens: Arc<[u8]> = (0..10).collect(); // Just a single allocation happens here.
impl<T, CTX> HashStable<CTX> for Arc<T> where T: HashStable<CTX> + ?Sized
fn hash_stable(&self, ctx: &mut CTX, hasher: &mut StableHasher<SipHasher128>)

impl<T, A> Ord for Arc<T, A>
fn cmp(&self, other: &Arc<T, A>) -> Ordering
Comparison for two Arcs.
The two are compared by calling cmp() on their inner values.
§Examples
use std::sync::Arc;
use std::cmp::Ordering;

let five = Arc::new(5);

assert_eq!(Ordering::Less, five.cmp(&Arc::new(6)));
fn max(self, other: Self) -> Self where Self: Sized
impl<T, A> PartialEq for Arc<T, A>
fn eq(&self, other: &Arc<T, A>) -> bool
Equality for two Arcs.
Two Arcs are equal if their inner values are equal, even if they are stored in different allocations.
If T also implements Eq (implying reflexivity of equality), two Arcs that point to the same allocation are always equal.
§Examples
use std::sync::Arc;

let five = Arc::new(5);

assert!(five == Arc::new(5));
fn ne(&self, other: &Arc<T, A>) -> bool
Inequality for two Arcs.
Two Arcs are not equal if their inner values are not equal.
If T also implements Eq (implying reflexivity of equality), two Arcs that point to the same value are always equal.
§Examples
use std::sync::Arc;

let five = Arc::new(5);

assert!(five != Arc::new(6));
impl<T, A> PartialOrd for Arc<T, A>
fn partial_cmp(&self, other: &Arc<T, A>) -> Option<Ordering>
Partial comparison for two Arcs.
The two are compared by calling partial_cmp() on their inner values.
§Examples
use std::sync::Arc;
use std::cmp::Ordering;

let five = Arc::new(5);

assert_eq!(Some(Ordering::Less), five.partial_cmp(&Arc::new(6)));
fn lt(&self, other: &Arc<T, A>) -> bool
Less-than comparison for two Arcs.
The two are compared by calling < on their inner values.
§Examples
use std::sync::Arc;

let five = Arc::new(5);

assert!(five < Arc::new(6));
fn le(&self, other: &Arc<T, A>) -> bool
‘Less than or equal to’ comparison for two Arcs.
The two are compared by calling <= on their inner values.
§Examples
use std::sync::Arc;

let five = Arc::new(5);

assert!(five <= Arc::new(5));
impl<T> Pointer for Arc<T>

impl Read for Arc<File>
fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>
fn read_buf(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error>
fn is_read_vectored(&self) -> bool
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>
fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>
fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>
fn read_buf_exact(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error>
fn by_ref(&mut self) -> &mut Self where Self: Sized

impl Seek for Arc<File>
fn seek(&mut self, pos: SeekFrom) -> Result<u64, Error>
fn stream_len(&mut self) -> Result<u64, Error>

impl<S> Subscriber for Arc<S> where S: Subscriber + ?Sized
fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest
fn max_level_hint(&self) -> Option<LevelFilter>
fn new_span(&self, span: &Attributes<'_>) -> Id
fn record_follows_from(&self, span: &Id, follows: &Id)
fn event_enabled(&self, event: &Event<'_>) -> bool
fn clone_span(&self, id: &Id) -> Id
fn drop_span(&self, id: Id)
fn current_span(&self) -> Current
impl<I, T> TypeFoldable<I> for Arc<T> where I: Interner, T: TypeFoldable<I>
fn try_fold_with<F>(self, folder: &mut F) -> Result<Arc<T>, <F as FallibleTypeFolder<I>>::Error> where F: FallibleTypeFolder<I>
fn fold_with<F>(self, folder: &mut F) -> Self where F: TypeFolder<I>

impl<I, T> TypeVisitable<I> for Arc<T> where I: Interner, T: TypeVisitable<I>
fn visit_with<V>(&self, visitor: &mut V) -> <V as TypeVisitor<I>>::Result where V: TypeVisitor<I>
impl Write for Arc<File>
source§fn write(&mut self, buf: &[u8]) -> Result<usize, Error>
fn write(&mut self, buf: &[u8]) -> Result<usize, Error>
source§fn is_write_vectored(&self) -> bool
fn is_write_vectored(&self) -> bool
can_vector
)source§fn flush(&mut self) -> Result<(), Error>
fn flush(&mut self) -> Result<(), Error>
source§fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>
fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>
source§fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>
fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>
write_all_vectored
)source§impl<'a, T> Writeable for Arc<T>
impl<'a, T> Writeable for Arc<T>
source§fn write_to<W>(&self, sink: &mut W) -> Result<(), Error>
fn write_to<W>(&self, sink: &mut W) -> Result<(), Error>
write_to_parts
, and discards any
Part
annotations.source§fn write_to_parts<W>(&self, sink: &mut W) -> Result<(), Error>where
W: PartsWrite + ?Sized,
fn write_to_parts<W>(&self, sink: &mut W) -> Result<(), Error>where
W: PartsWrite + ?Sized,
Part
annotations to the given sink. Errors from the
sink are bubbled up. The default implementation delegates to write_to
,
and doesn’t produce any Part
annotations.source§fn writeable_length_hint(&self) -> LengthHint
fn writeable_length_hint(&self) -> LengthHint
impl<T> CartablePointerLike for Arc<T>
impl<T> CloneStableDeref for Arc<T> where T: ?Sized
impl<T> CloneableCart for Arc<T> where T: ?Sized
impl<T> CloneableCartablePointerLike for Arc<T>
impl<T, U, A> CoerceUnsized<Arc<U, A>> for Arc<T, A>
impl<T, A> DerefPure for Arc<T, A>
impl<T, U> DispatchFromDyn<Arc<U>> for Arc<T>
impl<T> DynSend for Arc<T>
impl<T> DynSync for Arc<T>
impl<T, A> Eq for Arc<T, A>
impl<T, A> PinCoerceUnsized for Arc<T, A>
impl<T, A> Send for Arc<T, A>
impl<T> StableDeref for Arc<T> where T: ?Sized
impl<T, A> Sync for Arc<T, A>
impl<T, A> Unpin for Arc<T, A>
impl<T, A> UnwindSafe for Arc<T, A>
Auto Trait Implementations§
impl<T, A = Global> !DynSend for Arc<T, A>
impl<T, A = Global> !DynSync for Arc<T, A>
impl<T, A> Freeze for Arc<T, A>
impl<T, A> RefUnwindSafe for Arc<T, A>
Blanket Implementations§
impl<P> AsDowncastingAnyProvider for P where P: AnyProvider + ?Sized
fn as_downcasting(&self) -> DowncastingAnyProvider<'_, P>

impl<P> AsDynamicDataProviderAnyMarkerWrap for P
fn as_any_provider(&self) -> DynamicDataProviderAnyMarkerWrap<'_, P>
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)
🔬 This is a nightly-only experimental API. (clone_to_uninit)

impl<T, R> CollectAndApply<T, R> for T

impl<Q, K> Comparable<K> for Q

impl<Tcx, T> DepNodeParams<Tcx> for T
default fn fingerprint_style() -> FingerprintStyle
default fn to_fingerprint(&self, tcx: Tcx) -> Fingerprint
default fn to_debug_str(&self, _: Tcx) -> String
default fn recover(_: Tcx, _: &DepNode) -> Option<T>
Tries to recover the query key from the given DepNode, something which is needed when forcing DepNodes during red-green evaluation. The query system will only call this method if fingerprint_style() is not FingerprintStyle::Opaque. It is always valid to return None here, in which case incremental compilation will treat the query as having changed instead of forcing it.

impl<Q, K> Equivalent<K> for Q

impl<Q, K> Equivalent<K> for Q

impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.

impl<T> Filterable for T
fn filterable(self, filter_name: &'static str) -> RequestFilterDataProvider<T, fn(_: DataRequest<'_>) -> bool>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<P> IntoQueryParam<P> for P
fn into_query_param(self) -> P

impl<'tcx, T> IsSuggestable<'tcx> for T

impl<T> MaybeResult<T> for T

impl<T> Pointable for T

impl<T> ReadCacheOps for T
impl<I, T> TypeVisitableExt<I> for T where I: Interner, T: TypeVisitable<I>
fn has_type_flags(&self, flags: TypeFlags) -> bool
fn has_vars_bound_at_or_above(&self, binder: DebruijnIndex) -> bool
Returns true if self has any late-bound regions that are either bound by binder or bound by some binder outside of binder. If binder is ty::INNERMOST, this indicates whether there are any late-bound regions that appear free.
fn error_reported(&self) -> Result<(), <I as Interner>::ErrorGuaranteed>
fn has_vars_bound_above(&self, binder: DebruijnIndex) -> bool
Returns true if this type has any regions that escape binder (and hence are not bound by it).
fn has_escaping_bound_vars(&self) -> bool
Returns true if this type has regions that are not a part of the type. For example, for<'a> fn(&'a i32) returns false, while fn(&'a i32) would return true. The latter can occur when traversing through the former.
fn has_aliases(&self) -> bool
fn has_opaque_types(&self) -> bool
fn has_coroutines(&self) -> bool
fn references_error(&self) -> bool
fn has_non_region_param(&self) -> bool
fn has_infer_regions(&self) -> bool
fn has_infer_types(&self) -> bool
fn has_non_region_infer(&self) -> bool
fn has_infer(&self) -> bool
fn has_placeholders(&self) -> bool
fn has_non_region_placeholders(&self) -> bool
fn has_param(&self) -> bool
fn has_free_regions(&self) -> bool
fn has_erased_regions(&self) -> bool
fn has_erasable_regions(&self) -> bool
fn is_global(&self) -> bool
fn has_bound_regions(&self) -> bool
fn has_non_region_bound_vars(&self) -> bool
fn has_bound_vars(&self) -> bool
fn still_further_specializable(&self) -> bool
impl<I, T, U> Upcast<I, U> for T where U: UpcastFrom<I, T>

impl<I, T> UpcastFrom<I, T> for T
fn upcast_from(from: T, _tcx: I) -> T

impl<Tcx, T> Value<Tcx> for T where Tcx: DepContext
default fn from_cycle_error(tcx: Tcx, cycle_error: &CycleError, _guar: ErrorGuaranteed) -> T

impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
fn with_current_subscriber(self) -> WithDispatch<Self>

impl<'a, T> Captures<'a> for T where T: ?Sized

impl<T> ErasedDestructor for T where T: 'static

impl<T> MaybeSendSync for T
Layout§
Note: Unable to compute type layout, possibly due to this type having generic parameters. Layout can only be computed for concrete, fully-instantiated types.