std/thread/mod.rs
//! Native threads.
//!
//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with their own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through
//! [channels], Rust's message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! threadsafe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
//!
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! thread (i.e., join).
//!
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```rust
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached," which means that there is
//! no way for the program to learn when the spawned thread completes or otherwise
//! terminates.
//!
//! To learn when a thread completes, it is necessary to capture the [`JoinHandle`]
//! object that is returned by the call to [`spawn`], which provides
//! a `join` method that allows the caller to wait for the completion of the
//! spawned thread:
//!
//! ```rust
//! use std::thread;
//!
//! let thread_join_handle = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = thread_join_handle.join();
//! ```
//!
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the spawned thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the thread panicked.
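//!
//! As a minimal sketch of handling that result (the closure and its return
//! value here are illustrative):
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::spawn(|| 1 + 1);
//! match handle.join() {
//!     Ok(value) => assert_eq!(value, 2),
//!     Err(payload) => println!("the thread panicked: {payload:?}"),
//! }
//! ```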
//!
//! Note that there is no parent/child relationship between a thread that spawns a
//! new thread and the thread being spawned. In particular, the spawned thread may or
//! may not outlive the spawning thread, unless the spawning thread is the main thread.
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the thread:
//!
//! ```rust
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("thread1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//!
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
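//!
//! A minimal sketch of both ways (the thread name "worker" is illustrative):
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::Builder::new()
//!     .name("worker".to_string())
//!     .spawn(|| {
//!         // A `Thread` for the current thread, from inside it:
//!         assert_eq!(thread::current().name(), Some("worker"));
//!     })
//!     .unwrap();
//!
//! // A `Thread` for the spawned thread, via its `JoinHandle`:
//! assert_eq!(handle.thread().name(), Some("worker"));
//! handle.join().unwrap();
//! ```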
//!
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
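//!
//! A minimal sketch (the key name `COUNTER` is illustrative), using [`Cell`]
//! for interior mutability:
//!
//! ```rust
//! use std::cell::Cell;
//! use std::thread;
//!
//! thread_local! {
//!     static COUNTER: Cell<u32> = Cell::new(0);
//! }
//!
//! COUNTER.with(|c| c.set(c.get() + 1));
//!
//! thread::spawn(|| {
//!     // The spawned thread sees its own, freshly initialized copy.
//!     COUNTER.with(|c| assert_eq!(c.get(), 0));
//! }).join().unwrap();
//!
//! // The main thread's copy is unaffected by the spawned thread.
//! COUNTER.with(|c| assert_eq!(c.get(), 1));
//! ```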
//!
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` in
//!   unix-like platforms).
//!
//! ## Stack size
//!
//! The default stack size is platform-dependent and subject to change.
//! Currently, it is 2 MiB on all Tier-1 platforms.
//!
//! There are two ways to manually specify the stack size for spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this. Be aware that
//!   changes to `RUST_MIN_STACK` may be ignored after program start.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
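//!
//! A minimal sketch of the first way (the 4 MiB size is illustrative):
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::Builder::new()
//!     .stack_size(4 * 1024 * 1024) // request at least 4 MiB of stack
//!     .spawn(|| {
//!         // code that needs a deep stack
//!     })
//!     .unwrap();
//! handle.join().unwrap();
//! ```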
//!
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current::current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
//! [`thread_local!`]: crate::thread_local

#![stable(feature = "rust1", since = "1.0.0")]
#![deny(unsafe_op_in_unsafe_fn)]
// Under `test`, `__FastLocalKeyInner` seems unused.
#![cfg_attr(test, allow(dead_code))]

#[cfg(all(test, not(any(target_os = "emscripten", target_os = "wasi"))))]
mod tests;

use crate::any::Any;
use crate::cell::UnsafeCell;
use crate::ffi::CStr;
use crate::marker::PhantomData;
use crate::mem::{self, ManuallyDrop, forget};
use crate::num::NonZero;
use crate::pin::Pin;
use crate::sync::Arc;
use crate::sync::atomic::{Atomic, AtomicUsize, Ordering};
use crate::sys::sync::Parker;
use crate::sys::thread as imp;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::{Duration, Instant};
use crate::{env, fmt, io, panic, panicking, str};

#[stable(feature = "scoped_threads", since = "1.63.0")]
mod scoped;

#[stable(feature = "scoped_threads", since = "1.63.0")]
pub use scoped::{Scope, ScopedJoinHandle, scope};

mod current;

#[stable(feature = "rust1", since = "1.0.0")]
pub use current::current;
#[unstable(feature = "current_thread_id", issue = "147194")]
pub use current::current_id;
pub(crate) use current::{current_or_unnamed, current_os_id, drop_current};
use current::{set_current, try_with_current};

mod spawnhook;

#[unstable(feature = "thread_spawn_hook", issue = "132951")]
pub use spawnhook::add_spawn_hook;

////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////

#[macro_use]
mod local;

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::local::{AccessError, LocalKey};

// Implementation details used by the thread_local!{} macro.
#[doc(hidden)]
#[unstable(feature = "thread_local_internals", issue = "none")]
pub mod local_impl {
    pub use super::local::thread_local_process_attrs;
    pub use crate::sys::thread_local::*;
}

////////////////////////////////////////////////////////////////////////////////
// Builder
////////////////////////////////////////////////////////////////////////////////

/// Thread factory, which can be used in order to configure the properties of
/// a new thread.
///
/// Methods can be chained on it in order to configure it.
///
/// The two configurations available are:
///
/// - [`name`]: specifies an [associated name for the thread][naming-threads]
/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
///
/// The [`spawn`] method will take ownership of the builder and return an
/// [`io::Result`] containing the thread handle with the given configuration.
///
/// The [`thread::spawn`] free function uses a `Builder` with default
/// configuration and [`unwrap`]s its return value.
///
/// You may want to use [`spawn`] instead of [`thread::spawn`] when you want
/// to recover from a failure to launch a thread: the free function panics,
/// whereas the `Builder` method returns an [`io::Result`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
///
/// [`stack_size`]: Builder::stack_size
/// [`name`]: Builder::name
/// [`spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
/// [`io::Result`]: crate::io::Result
/// [`unwrap`]: crate::result::Result::unwrap
/// [naming-threads]: ./index.html#naming-threads
/// [stack-size]: ./index.html#stack-size
#[must_use = "must eventually spawn the thread"]
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
    // Skip running and inheriting the thread spawn hooks
    no_hooks: bool,
}

impl Builder {
    /// Generates the base configuration for spawning a thread, from which
    /// configuration methods can be chained.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into())
    ///     .stack_size(32 * 1024);
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn new() -> Builder {
        Builder { name: None, stack_size: None, no_hooks: false }
    }

    /// Names the thread-to-be. Currently the name is used for identification
    /// only in panic messages.
    ///
    /// The name must not contain null bytes (`\0`).
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(mut self, name: String) -> Builder {
        self.name = Some(name);
        self
    }

    /// Sets the size of the stack (in bytes) for the new thread.
    ///
    /// The actual stack size may be greater than this value if
    /// the platform specifies a minimal stack size.
    ///
    /// For more information about the stack size for threads, see
    /// [this module-level documentation][stack-size].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new().stack_size(32 * 1024);
    /// ```
    ///
    /// [stack-size]: ./index.html#stack-size
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn stack_size(mut self, size: usize) -> Builder {
        self.stack_size = Some(size);
        self
    }

    /// Disables running and inheriting [spawn hooks](add_spawn_hook).
    ///
    /// Use this if the parent thread is in no way relevant for the child thread.
    /// For example, when lazily spawning threads for a thread pool.
    #[unstable(feature = "thread_spawn_hook", issue = "132951")]
    pub fn no_hooks(mut self) -> Builder {
        self.no_hooks = true;
        self
    }

    /// Spawns a new thread by taking ownership of the `Builder`, and returns an
    /// [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// For more complete documentation, see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// [`io::Result`]: crate::io::Result
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,
    {
        unsafe { self.spawn_unchecked(f) }
    }

    /// Spawns a new thread without any lifetime restrictions by taking ownership
    /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
    /// except for the relaxed lifetime bounds, which render it unsafe.
    /// For more complete documentation, see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Safety
    ///
    /// The caller has to ensure that the spawned thread does not outlive any
    /// references in the supplied thread closure and its return type.
    /// This can be guaranteed in two ways:
    ///
    /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
    ///   data is dropped
    /// - use only types with `'static` lifetime bounds, i.e., those with no or only
    ///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
    ///   and [`thread::spawn`][`spawn`] enforce this property statically)
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let x = 1;
    /// let thread_x = &x;
    ///
    /// let handler = unsafe {
    ///     builder.spawn_unchecked(move || {
    ///         println!("x = {}", *thread_x);
    ///     }).unwrap()
    /// };
    ///
    /// // caller has to ensure `join()` is called, otherwise
    /// // it is possible to access freed memory if `x` gets
    /// // dropped before the thread closure is executed!
    /// handler.join().unwrap();
    /// ```
    ///
    /// [`io::Result`]: crate::io::Result
    #[stable(feature = "thread_spawn_unchecked", since = "1.82.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub unsafe fn spawn_unchecked<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send,
        T: Send,
    {
        Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?))
    }

    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    unsafe fn spawn_unchecked_<'scope, F, T>(
        self,
        f: F,
        scope_data: Option<Arc<scoped::ScopeData>>,
    ) -> io::Result<JoinInner<'scope, T>>
    where
        F: FnOnce() -> T,
        F: Send,
        T: Send,
    {
        let Builder { name, stack_size, no_hooks } = self;

        let stack_size = stack_size.unwrap_or_else(|| {
            static MIN: Atomic<usize> = AtomicUsize::new(0);

            match MIN.load(Ordering::Relaxed) {
                0 => {}
                n => return n - 1,
            }

            let amt = env::var_os("RUST_MIN_STACK")
                .and_then(|s| s.to_str().and_then(|s| s.parse().ok()))
                .unwrap_or(imp::DEFAULT_MIN_STACK_SIZE);

            // 0 is our sentinel value, so ensure that we'll never see 0 after
            // initialization has run
            MIN.store(amt + 1, Ordering::Relaxed);
            amt
        });

        let id = ThreadId::new();
        let my_thread = Thread::new(id, name);

        let hooks = if no_hooks {
            spawnhook::ChildSpawnHooks::default()
        } else {
            spawnhook::run_spawn_hooks(&my_thread)
        };

        let their_thread = my_thread.clone();

        let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet {
            scope: scope_data,
            result: UnsafeCell::new(None),
            _marker: PhantomData,
        });
        let their_packet = my_packet.clone();

        // Pass `f` in `MaybeUninit` because the closure might actually *run longer than the
        // lifetime of `F`*.
        // See <https://github.com/rust-lang/rust/issues/101983> for more details.
        // To prevent leaks we use a wrapper that drops its contents.
        #[repr(transparent)]
        struct MaybeDangling<T>(mem::MaybeUninit<T>);
        impl<T> MaybeDangling<T> {
            fn new(x: T) -> Self {
                MaybeDangling(mem::MaybeUninit::new(x))
            }
            fn into_inner(self) -> T {
                // Make sure we don't drop.
                let this = ManuallyDrop::new(self);
                // SAFETY: we are always initialized.
                unsafe { this.0.assume_init_read() }
            }
        }
        impl<T> Drop for MaybeDangling<T> {
            fn drop(&mut self) {
                // SAFETY: we are always initialized.
                unsafe { self.0.assume_init_drop() };
            }
        }

        let f = MaybeDangling::new(f);
        let main = move || {
            if let Err(_thread) = set_current(their_thread.clone()) {
                // Both the current thread handle and the ID should not be
                // initialized yet. Since only the C runtime and some of our
                // platform code run before this, this point shouldn't be
                // reachable. Use an abort to save binary size (see #123356).
                rtabort!("something here is badly broken!");
            }

            if let Some(name) = their_thread.cname() {
                imp::set_name(name);
            }

            let f = f.into_inner();
            let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
                crate::sys::backtrace::__rust_begin_short_backtrace(|| hooks.run());
                crate::sys::backtrace::__rust_begin_short_backtrace(f)
            }));
            // SAFETY: `their_packet` has been built just above and moved by the
            // closure (it is an Arc<...>) and `my_packet` will be stored in the
            // same `JoinInner` as this closure, meaning the mutation will be
            // safe (not modify it and affect a value far away).
            unsafe { *their_packet.result.get() = Some(try_result) };
            // Here `their_packet` gets dropped, and if this is the last `Arc` for that packet that
            // will call `decrement_num_running_threads` and therefore signal that this thread is
            // done.
            drop(their_packet);
            // Here, the lifetime `'scope` can end. `main` keeps running for a bit
            // after that before returning itself.
        };

        if let Some(scope_data) = &my_packet.scope {
            scope_data.increment_num_running_threads();
        }

        let main = Box::new(main);
        // SAFETY: dynamic size and alignment of the Box remain the same. See below for why the
        // lifetime change is justified.
        let main =
            unsafe { Box::from_raw(Box::into_raw(main) as *mut (dyn FnOnce() + Send + 'static)) };

        Ok(JoinInner {
            // SAFETY:
            //
            // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
            // through FFI or otherwise used with low-level threading primitives that have no
            // notion of or way to enforce lifetimes.
            //
            // As mentioned in the `Safety` section of this function's documentation, the caller of
            // this function needs to guarantee that the passed-in lifetime is sufficiently long
            // for the lifetime of the thread.
            //
            // Similarly, the `sys` implementation must guarantee that no references to the closure
            // exist after the thread has terminated, which is signaled by `Thread::join`
            // returning.
            native: unsafe { imp::Thread::new(stack_size, my_thread.name(), main)? },
            thread: my_thread,
            packet: my_packet,
        })
    }
}

////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////

/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle provides a [`join`] method that can be used to join the spawned
/// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing
/// the argument given to [`panic!`].
///
/// If the join handle is dropped, the spawned thread will implicitly be *detached*.
/// In this case, the spawned thread may no longer be joined.
/// (It is the responsibility of the program to either eventually join threads it
/// creates or detach them; otherwise, a resource leak will result.)
///
/// This function creates a thread with the default parameters of [`Builder`].
/// To specify the new thread's stack size or the name, use [`Builder::spawn`].
///
/// As you can see in the signature of `spawn`, there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can outlive the lifetime they have been created in.
///
///   Indeed, if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return, we need to have them valid as long as
///   possible, that is, until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`]; here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{result}");
/// ```
///
/// # Notes
///
/// This function has the same minimal guarantee regarding "foreign" unwinding operations (e.g.
/// an exception thrown from C++ code, or a `panic!` in Rust code compiled or linked with a
/// different runtime) as [`catch_unwind`]; namely, if the thread created with `thread::spawn`
/// unwinds all the way to the root with such an exception, one of two behaviors is possible,
/// and it is unspecified which will occur:
///
/// * The process aborts.
/// * The process does not abort, and [`join`] will return a `Result::Err`
///   containing an opaque type.
///
/// [`catch_unwind`]: ../../std/panic/fn.catch_unwind.html
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}

/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This calls the underlying OS scheduler's yield primitive, signaling
/// that the calling thread is willing to give up its remaining timeslice
/// so that the OS may schedule other threads on the CPU.
///
/// A drawback of yielding in a loop is that if the OS does not have any
/// other ready threads to run on the current CPU, the thread will effectively
/// busy-wait, which wastes CPU time and energy.
///
/// Therefore, when waiting for events of interest, a programmer's first
/// choice should be to use synchronization devices such as [`channel`]s,
/// [`Condvar`]s, [`Mutex`]es or [`join`], since these primitives are
/// implemented in a blocking manner, giving up the CPU until the event
/// of interest has occurred, which avoids repeated yielding.
///
/// `yield_now` should thus be used only rarely, mostly in situations where
/// repeated polling is required because there is no other suitable way to
/// learn when an event of interest has occurred.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::yield_now()
}

/// Determines whether the current thread is unwinding because of a panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     print!("a: ");
///     let a = SomeStruct;
/// }
///
/// {
///     print!("b: ");
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[inline]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}

/// Uses [`sleep`].
///
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}

/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// Currently, specifying a zero duration on Unix platforms returns immediately
/// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
/// platforms the underlying [`Sleep`] syscall is always invoked.
/// If the intention is to yield the current time-slice you may want to use
/// [`yield_now`] instead.
///
/// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
/// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::sleep(dur)
}
891
/// Puts the current thread to sleep until the specified deadline has passed.
///
/// The thread may still be asleep after the deadline specified due to
/// scheduling specifics or platform-dependent functionality. It will never
/// wake before.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// In most cases this function will call an OS specific function. Where that
/// is not supported [`sleep`] is used. Those platforms are referred to as
/// "Other" in the table below.
///
/// # Underlying System calls
///
/// The following system calls are [currently] being used:
///
/// | Platform           | System call                                                      |
/// |--------------------|------------------------------------------------------------------|
/// | Linux              | [clock_nanosleep] (Monotonic Clock)                              |
/// | BSD except OpenBSD | [clock_nanosleep] (Monotonic Clock)                              |
/// | Android            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Solaris            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Illumos            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Dragonfly          | [clock_nanosleep] (Monotonic Clock)                              |
/// | Hurd               | [clock_nanosleep] (Monotonic Clock)                              |
/// | Fuchsia            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Vxworks            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Other              | `sleep_until` uses [`sleep`] and does not issue a syscall itself |
///
/// [currently]: crate::io#platform-specific-behavior
/// [clock_nanosleep]: https://linux.die.net/man/3/clock_nanosleep
///
/// **Disclaimer:** These system calls might change over time.
///
/// # Examples
///
/// A simple game loop that limits the game to 60 frames per second.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # fn update() {}
/// # fn render() {}
/// #
/// let max_fps = 60.0;
/// let frame_time = Duration::from_secs_f32(1.0 / max_fps);
/// let mut next_frame = Instant::now();
/// loop {
///     thread::sleep_until(next_frame);
///     next_frame += frame_time;
///     update();
///     render();
/// }
/// ```
///
/// A slow API we must not call too fast and which takes a few
/// tries before succeeding. By using `sleep_until`, the time the
/// API call takes does not influence when we retry or when we give up.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # enum Status {
/// #     Ready(usize),
/// #     Waiting,
/// # }
/// # fn slow_web_api_call() -> Status { Status::Ready(42) }
/// #
/// # const MAX_DURATION: Duration = Duration::from_secs(10);
/// #
/// # fn try_api_call() -> Result<usize, ()> {
/// let deadline = Instant::now() + MAX_DURATION;
/// let delay = Duration::from_millis(250);
/// let mut next_attempt = Instant::now();
/// loop {
///     if Instant::now() > deadline {
///         break Err(());
///     }
///     if let Status::Ready(data) = slow_web_api_call() {
///         break Ok(data);
///     }
///
///     next_attempt = deadline.min(next_attempt + delay);
///     thread::sleep_until(next_attempt);
/// }
/// # }
/// # let _data = try_api_call();
/// ```
#[unstable(feature = "thread_sleep_until", issue = "113752")]
pub fn sleep_until(deadline: Instant) {
    imp::sleep_until(deadline)
}

/// Used to ensure that `park` and `park_timeout` do not unwind, as that can
/// cause undefined behavior if not handled correctly (see #102398 for context).
struct PanicGuard;

impl Drop for PanicGuard {
    fn drop(&mut self) {
        rtabort!("an irrecoverable error occurred while synchronizing threads")
    }
}

/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility. However,
/// it is guaranteed that this function will not panic (it may abort the
/// process if the implementation encounters some rare errors).
///
/// # `park` and `unpark`
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token can be held by a thread even if it is currently not
///   parked, [`unpark`] followed by [`park`] will result in the second call returning immediately.
///   However, note that to rely on this guarantee, you need to make sure that your `unpark` happens
///   after all `park`s that may be done by other data structures!
///
/// The API is typically used by acquiring a handle to the current thread, placing that handle in a
/// shared data structure so that other threads can find it, and then `park`ing in a loop. When some
/// desired condition is met, another thread calls [`unpark`] on the handle. The last bullet point
/// above guarantees that even if the `unpark` occurs before the thread is finished `park`ing, it
/// will be woken up properly.
///
/// Note that the coordination via the shared data structure is crucial: if you `unpark` a thread
/// without first establishing that it is about to `park` within your code, that `unpark` may
/// get consumed by a *different* `park` in the same thread, leading to a deadlock. This also means
/// you must not call unknown code between setting up for parking and calling `park`; for instance,
/// if you invoke `println!`, that may itself call `park` and thus consume your `unpark` and cause a
/// deadlock.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
///
/// # Memory Ordering
///
/// Calls to `unpark` _synchronize-with_ calls to `park`, meaning that memory
/// operations performed before a call to `unpark` are made visible to the thread that
/// consumes the token and returns from `park`. Note that all `park` and `unpark`
/// operations for a given thread form a total order and _all_ prior `unpark` operations
/// synchronize-with `park`.
///
/// In atomic ordering terms, `unpark` performs a `Release` operation and `park`
/// performs the corresponding `Acquire` operation. Calls to `unpark` for the same
/// thread form a [release sequence].
///
/// Note that being unblocked does not imply a call was made to `unpark`, because
/// wakeups can also be spurious. For example, a valid, but inefficient,
/// implementation could have `park` and `unpark` return immediately without doing anything,
/// making *all* wakeups spurious.
///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::atomic::{Ordering, AtomicBool};
/// use std::time::Duration;
///
/// static QUEUED: AtomicBool = AtomicBool::new(false);
/// static FLAG: AtomicBool = AtomicBool::new(false);
///
/// let parked_thread = thread::spawn(move || {
///     println!("Thread spawned");
///     // Signal that we are going to `park`. Between this store and our `park`, there may
///     // be no other `park`, or else that `park` could consume our `unpark` token!
///     QUEUED.store(true, Ordering::Release);
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !FLAG.load(Ordering::Acquire) {
///         // We can *not* use `println!` here since that could use thread parking internally.
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Ensure the thread is about to park.
/// // This is crucial! It guarantees that the `unpark` below is not consumed
/// // by some other code in the parked thread (e.g. inside `println!`).
/// while !QUEUED.load(Ordering::Acquire) {
///     // Spinning is of course inefficient; in practice, this would more likely be
///     // a dequeue where we have no work to do if there's nobody queued.
///     std::hint::spin_loop();
/// }
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here: if `unpark`
/// // happens first, `park` will return immediately.
/// // There is also no other `park` that could consume this token,
/// // since we waited until the other thread got queued.
/// // Hence there is no risk of a deadlock.
/// FLAG.store(true, Ordering::Release);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
/// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    let guard = PanicGuard;
    // SAFETY: `park` is called on the parker owned by this thread.
    unsafe {
        current().park();
    }
    // No panic occurred, do not abort.
    forget(guard);
}

/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `ms`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}

/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {elapsed:?}");
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    let guard = PanicGuard;
    // SAFETY: park_timeout is called on a handle owned by this thread.
    unsafe {
        current().park_timeout(dur);
    }
    // No panic occurred, do not abort.
    forget(guard);
}

////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////

/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that uniquely identifies each thread
/// created during the lifetime of a process. `ThreadId`s are guaranteed not to
/// be reused, even when a thread terminates. `ThreadId`s are under the control
/// of Rust's standard library and there may not be any relationship between
/// `ThreadId` and the underlying platform's notion of a thread identifier --
/// the two concepts cannot, therefore, be used interchangeably. A `ThreadId`
/// can be retrieved from the [`id`] method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZero<u64>);

impl ThreadId {
    // Generate a new unique thread ID.
    pub(crate) fn new() -> ThreadId {
        #[cold]
        fn exhausted() -> ! {
            panic!("failed to generate unique thread ID: bitspace exhausted")
        }

        cfg_select! {
            target_has_atomic = "64" => {
                use crate::sync::atomic::{Atomic, AtomicU64};

                static COUNTER: Atomic<u64> = AtomicU64::new(0);

                let mut last = COUNTER.load(Ordering::Relaxed);
                loop {
                    let Some(id) = last.checked_add(1) else {
                        exhausted();
                    };

                    match COUNTER.compare_exchange_weak(last, id, Ordering::Relaxed, Ordering::Relaxed) {
                        Ok(_) => return ThreadId(NonZero::new(id).unwrap()),
                        Err(id) => last = id,
                    }
                }
            }
            _ => {
                use crate::sync::{Mutex, PoisonError};

                static COUNTER: Mutex<u64> = Mutex::new(0);

                let mut counter = COUNTER.lock().unwrap_or_else(PoisonError::into_inner);
                let Some(id) = counter.checked_add(1) else {
                    // In case the panic handler ends up calling `ThreadId::new()`,
                    // avoid reentrant lock acquisition.
                    drop(counter);
                    exhausted();
                };

                *counter = id;
                drop(counter);
                ThreadId(NonZero::new(id).unwrap())
            }
        }
    }

    #[cfg(any(not(target_thread_local), target_has_atomic = "64"))]
    fn from_u64(v: u64) -> Option<ThreadId> {
        NonZero::new(v).map(ThreadId)
    }

    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
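    ///
    /// # Examples
    ///
    /// A small sketch of the intended use (the example assumes the unstable
    /// `thread_id_value` feature is enabled); only equality of the returned
    /// values is meaningful:
    ///
    /// ```
    /// #![feature(thread_id_value)]
    /// use std::thread;
    ///
    /// let id = thread::current().id().as_u64();
    /// // A thread observes the same value for its own ID every time.
    /// assert_eq!(id, thread::current().id().as_u64());
    /// ```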
    #[must_use]
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZero<u64> {
        self.0
    }
}

////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////

// This module ensures private fields are kept private, which is necessary to enforce the safety requirements.
mod thread_name_string {
    use crate::ffi::{CStr, CString};
    use crate::str;

    /// Like a `String` it's guaranteed UTF-8 and like a `CString` it's null terminated.
    pub(crate) struct ThreadNameString {
        inner: CString,
    }

    impl From<String> for ThreadNameString {
        fn from(s: String) -> Self {
            Self {
                inner: CString::new(s).expect("thread name may not contain interior null bytes"),
            }
        }
    }

    impl ThreadNameString {
        pub fn as_cstr(&self) -> &CStr {
            &self.inner
        }

        pub fn as_str(&self) -> &str {
            // SAFETY: `ThreadNameString` is guaranteed to be UTF-8.
            unsafe { str::from_utf8_unchecked(self.inner.to_bytes()) }
        }
    }
}

use thread_name_string::ThreadNameString;

/// Store the ID of the main thread.
///
/// The thread handle for the main thread is created lazily, and this might even
/// happen pre-main. Since not every platform has a way to identify the main
/// thread when that happens – macOS's `pthread_main_np` function being a notable
/// exception – we cannot assign it the right name right then. Instead, in our
/// runtime startup code, we remember the thread ID of the main thread (through
/// this module's `set` function) and use it to identify the main thread from then
/// on. This works reliably and has the additional advantage that we can report
/// the right thread name on main even after the thread handle has been destroyed.
/// Note however that this also means that the name reported in pre-main functions
/// will be incorrect, but that's just something we have to live with.
pub(crate) mod main_thread {
    cfg_select! {
        target_has_atomic = "64" => {
            use super::ThreadId;
            use crate::sync::atomic::{Atomic, AtomicU64};
            use crate::sync::atomic::Ordering::Relaxed;

            static MAIN: Atomic<u64> = AtomicU64::new(0);

            pub(super) fn get() -> Option<ThreadId> {
                ThreadId::from_u64(MAIN.load(Relaxed))
            }

            /// # Safety
            /// May only be called once.
            pub(crate) unsafe fn set(id: ThreadId) {
                MAIN.store(id.as_u64().get(), Relaxed)
            }
        }
        _ => {
            use super::ThreadId;
            use crate::mem::MaybeUninit;
            use crate::sync::atomic::{Atomic, AtomicBool};
            use crate::sync::atomic::Ordering::{Acquire, Release};

            static INIT: Atomic<bool> = AtomicBool::new(false);
            static mut MAIN: MaybeUninit<ThreadId> = MaybeUninit::uninit();

            pub(super) fn get() -> Option<ThreadId> {
                if INIT.load(Acquire) {
                    Some(unsafe { MAIN.assume_init() })
                } else {
                    None
                }
            }

            /// # Safety
            /// May only be called once.
            pub(crate) unsafe fn set(id: ThreadId) {
                unsafe { MAIN = MaybeUninit::new(id) };
                INIT.store(true, Release);
            }
        }
    }
}

/// Run a function with the current thread's name.
///
/// Modulo thread local accesses, this function is safe to call from signal
/// handlers and in similar circumstances where allocations are not possible.
pub(crate) fn with_current_name<F, R>(f: F) -> R
where
    F: FnOnce(Option<&str>) -> R,
{
    try_with_current(|thread| {
        if let Some(thread) = thread {
            // If there is a current thread handle, try to use the name stored
            // there.
            if let Some(name) = &thread.inner.name {
                return f(Some(name.as_str()));
            } else if Some(thread.inner.id) == main_thread::get() {
                // The main thread doesn't store its name in the handle, we must
                // identify it through its ID. Since we already have the `Thread`,
                // we can retrieve the ID from it instead of going through another
                // thread local.
                return f(Some("main"));
            }
        } else if let Some(main) = main_thread::get()
            && let Some(id) = current::id::get()
            && id == main
        {
            // The main thread doesn't always have a thread handle, we must
            // identify it through its ID instead. The checks are ordered so
            // that the current ID is only loaded if it is actually needed,
            // since loading it from TLS might need multiple expensive accesses.
            return f(Some("main"));
        }

        f(None)
    })
}

/// The internal representation of a `Thread` handle.
///
/// We explicitly set the alignment for our guarantee in `Thread::into_raw`. This
/// allows applications to stuff extra metadata bits into the alignment, which
/// can be rather useful when working with atomics.
#[repr(align(8))]
struct Inner {
    name: Option<ThreadNameString>,
    id: ThreadId,
    parker: Parker,
}

impl Inner {
    fn parker(self: Pin<&Self>) -> Pin<&Parker> {
        unsafe { Pin::map_unchecked(self, |inner| &inner.parker) }
    }
}

#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads. See the
/// docs of [`Builder`] and [`spawn`] for more details.
///
/// [`thread::current`]: current::current
pub struct Thread {
    inner: Pin<Arc<Inner>>,
}

impl Thread {
    pub(crate) fn new(id: ThreadId, name: Option<String>) -> Thread {
        let name = name.map(ThreadNameString::from);

        // We have to use `unsafe` here to construct the `Parker` in-place,
        // which is required for the UNIX implementation.
        //
        // SAFETY: We pin the Arc immediately after creation, so its address never
        // changes.
        let inner = unsafe {
            let mut arc = Arc::<Inner>::new_uninit();
            let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr();
            (&raw mut (*ptr).name).write(name);
            (&raw mut (*ptr).id).write(id);
            Parker::new_in_place(&raw mut (*ptr).parker);
            Pin::new_unchecked(arc.assume_init())
        };

        Thread { inner }
    }

    /// Like the public [`park`], but callable on any handle. This is used to
    /// allow parking in TLS destructors.
    ///
    /// # Safety
    /// May only be called from the thread to which this handle belongs.
    pub(crate) unsafe fn park(&self) {
        unsafe { self.inner.as_ref().parker().park() }
    }

    /// Like the public [`park_timeout`], but callable on any handle. This is
    /// used to allow parking in TLS destructors.
    ///
    /// # Safety
    /// May only be called from the thread to which this handle belongs.
    pub(crate) unsafe fn park_timeout(&self, dur: Duration) {
        unsafe { self.inner.as_ref().parker().park_timeout(dur) }
    }

    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// static QUEUED: AtomicBool = AtomicBool::new(false);
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         QUEUED.store(true, Ordering::Release);
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// // Wait until the other thread is queued.
    /// // This is crucial! It guarantees that the `unpark` below is not consumed
    /// // by some other code in the parked thread (e.g. inside `println!`).
    /// while !QUEUED.load(Ordering::Acquire) {
    ///     // Spinning is of course inefficient; in practice, this would more likely be
    ///     // a dequeue where we have no work to do if there's nobody queued.
    ///     std::hint::spin_loop();
    /// }
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[inline]
    pub fn unpark(&self) {
        self.inner.as_ref().parker().unpark();
    }

    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    #[must_use]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }

    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn name(&self) -> Option<&str> {
        if let Some(name) = &self.inner.name {
            Some(name.as_str())
        } else if main_thread::get() == Some(self.inner.id) {
            Some("main")
        } else {
            None
        }
    }

    /// Consumes the `Thread`, returning a raw pointer.
    ///
    /// To avoid a memory leak the pointer must be converted
    /// back into a `Thread` using [`Thread::from_raw`]. The pointer is
    /// guaranteed to be aligned to at least 8 bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(thread_raw)]
    ///
    /// use std::thread::{self, Thread};
    ///
    /// let thread = thread::current();
    /// let id = thread.id();
    /// let ptr = Thread::into_raw(thread);
    /// unsafe {
    ///     assert_eq!(Thread::from_raw(ptr).id(), id);
    /// }
    /// ```
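    ///
    /// Since the pointer is aligned to at least 8 bytes, its low three bits
    /// are zero and can hold a small tag. This is a sketch of that pattern,
    /// not a prescribed API; the tag must be cleared again before the pointer
    /// is passed back to [`Thread::from_raw`]:
    ///
    /// ```
    /// #![feature(thread_raw)]
    ///
    /// use std::thread::{self, Thread};
    ///
    /// let addr = Thread::into_raw(thread::current()) as usize;
    /// assert_eq!(addr & 0b111, 0); // the alignment guarantee
    /// let tagged = addr | 0b101;   // stash three bits of metadata
    /// let untagged = (tagged & !0b111) as *const ();
    /// // SAFETY: `untagged` is exactly the pointer returned by `into_raw`.
    /// drop(unsafe { Thread::from_raw(untagged) });
    /// ```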
    #[unstable(feature = "thread_raw", issue = "97523")]
    pub fn into_raw(self) -> *const () {
        // SAFETY: We only expose an opaque pointer, which maintains the `Pin` invariant.
        let inner = unsafe { Pin::into_inner_unchecked(self.inner) };
        Arc::into_raw(inner) as *const ()
    }

    /// Constructs a `Thread` from a raw pointer.
    ///
    /// The raw pointer must have been previously returned
    /// by a call to [`Thread::into_raw`].
    ///
    /// # Safety
    ///
    /// This function is unsafe because improper use may lead
    /// to memory unsafety, even if the returned `Thread` is never
    /// accessed.
    ///
    /// Creating a `Thread` from a pointer other than one returned
    /// from [`Thread::into_raw`] is **undefined behavior**.
    ///
    /// Calling this function twice on the same raw pointer can lead
    /// to a double-free if both `Thread` instances are dropped.
    #[unstable(feature = "thread_raw", issue = "97523")]
    pub unsafe fn from_raw(ptr: *const ()) -> Thread {
        // SAFETY: Upheld by caller.
        unsafe { Thread { inner: Pin::new_unchecked(Arc::from_raw(ptr as *const Inner)) } }
    }

    fn cname(&self) -> Option<&CStr> {
        if let Some(name) = &self.inner.name {
            Some(name.as_cstr())
        } else if main_thread::get() == Some(self.inner.id) {
            Some(c"main")
        } else {
            None
        }
    }
}

#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread")
            .field("id", &self.id())
            .field("name", &self.name())
            .finish_non_exhaustive()
    }
}

1698////////////////////////////////////////////////////////////////////////////////
1699// JoinHandle
1700////////////////////////////////////////////////////////////////////////////////
1701
1702/// A specialized [`Result`] type for threads.
1703///
1704/// Indicates the manner in which a thread exited.
1705///
1706/// The value contained in the `Result::Err` variant
1707/// is the value the thread panicked with;
1708/// that is, the argument the `panic!` macro was called with.
1709/// Unlike with normal errors, this value doesn't implement
1710/// the [`Error`](crate::error::Error) trait.
1711///
1712/// Thus, a sensible way to handle a thread panic is to either:
1713///
1714/// 1. propagate the panic with [`std::panic::resume_unwind`]
1715/// 2. or in case the thread is intended to be a subsystem boundary
1716/// that is supposed to isolate system-level failures,
1717/// match on the `Err` variant and handle the panic in an appropriate way
1718///
1719/// A thread that completes without panicking is considered to exit successfully.
1720///
1721/// # Examples
1722///
1723/// Matching on the result of a joined thread:
1724///
1725/// ```no_run
1726/// use std::{fs, thread, panic};
1727///
1728/// fn copy_in_thread() -> thread::Result<()> {
1729/// thread::spawn(|| {
1730/// fs::copy("foo.txt", "bar.txt").unwrap();
1731/// }).join()
1732/// }
1733///
1734/// fn main() {
1735/// match copy_in_thread() {
1736/// Ok(_) => println!("copy succeeded"),
1737/// Err(e) => panic::resume_unwind(e),
1738/// }
1739/// }
1740/// ```
1741///
1742/// [`Result`]: crate::result::Result
1743/// [`std::panic::resume_unwind`]: crate::panic::resume_unwind
1744#[stable(feature = "rust1", since = "1.0.0")]
1745#[doc(search_unbox)]
1746pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;
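Beyond propagating with `resume_unwind`, the `Err` payload can also be inspected directly, which is the subsystem-boundary approach described above. A minimal sketch (the `&str` downcast only succeeds for panics raised with a string literal; panics with formatted messages carry a `String` instead):

```rust
use std::thread;

fn main() {
    // Spawn a thread that panics with a &'static str payload.
    let result: thread::Result<()> = thread::spawn(|| panic!("boom")).join();
    match result {
        Ok(()) => println!("thread finished normally"),
        Err(payload) => {
            // The payload is a Box<dyn Any + Send>; downcast to inspect it.
            if let Some(msg) = payload.downcast_ref::<&str>() {
                println!("thread panicked with: {msg}");
            } else {
                println!("thread panicked with a non-string payload");
            }
        }
    }
}
```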

// This packet is used to communicate the return value between the spawned
// thread and the rest of the program. It is shared through an `Arc` and
// there's no need for a mutex here because synchronization happens with `join()`
// (the caller will never read this packet until the thread has exited).
//
// An Arc to the packet is stored into a `JoinInner` which in turn is placed
// in `JoinHandle`.
struct Packet<'scope, T> {
    scope: Option<Arc<scoped::ScopeData>>,
    result: UnsafeCell<Option<Result<T>>>,
    _marker: PhantomData<Option<&'scope scoped::ScopeData>>,
}

// Due to the usage of `UnsafeCell` we need to manually implement Sync.
// The type `T` should already always be Send (otherwise the thread could not
// have been created) and the Packet is Sync because all access to the
// `UnsafeCell` is synchronized (by the `join()` boundary), and `ScopeData` is Sync.
unsafe impl<'scope, T: Send> Sync for Packet<'scope, T> {}

impl<'scope, T> Drop for Packet<'scope, T> {
    fn drop(&mut self) {
        // If this packet was for a thread that ran in a scope, the thread
        // panicked, and nobody consumed the panic payload, we make sure
        // the scope function will panic.
        let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_)));
        // Drop the result without causing unwinding.
        // This is only relevant for threads that aren't join()ed, as
        // join() will take the `result` and set it to None, such that
        // there is nothing left to drop here.
        // If this panics, we should handle that, because we're outside the
        // outermost `catch_unwind` of our thread.
        // We just abort in that case, since there's nothing else we can do.
        // (And even if we tried to handle it somehow, we'd also need to handle
        // the case where the panic payload we get out of it also panics on
        // drop, and so on. See issue #86027.)
        if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            *self.result.get_mut() = None;
        })) {
            rtabort!("thread result panicked on drop");
        }
        // Book-keeping so the scope knows when it's done.
        if let Some(scope) = &self.scope {
            // Now that there will be no more user code running on this thread
            // that can use 'scope, mark the thread as 'finished'.
            // It's important we only do this after the `result` has been dropped,
            // since dropping it might still use things it borrowed from 'scope.
            scope.decrement_num_running_threads(unhandled_panic);
        }
    }
}

/// Inner representation for JoinHandle
struct JoinInner<'scope, T> {
    native: imp::Thread,
    thread: Thread,
    packet: Arc<Packet<'scope, T>>,
}

impl<'scope, T> JoinInner<'scope, T> {
    fn join(mut self) -> Result<T> {
        self.native.join();
        Arc::get_mut(&mut self.packet)
            // FIXME(fuzzypixelz): returning an error instead of panicking here
            // would require updating the documentation of
            // `std::thread::Result`; currently we can return `Err` if and only
            // if the thread had panicked.
            .expect("threads should not terminate unexpectedly")
            .result
            .get_mut()
            .take()
            .unwrap()
    }
}

/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
///
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
///
/// A thread being detached and outliving the thread that spawned it:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before.
///         thread::sleep(Duration::from_millis(10));
///         // This will be called, even though the JoinHandle is dropped.
///         println!("♫ Still alive ♫");
///     });
/// });
///
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
///
/// // We make sure that the new thread has time to run, before the main
/// // thread returns.
///
/// thread::sleep(Duration::from_millis(1000));
/// ```
///
/// [`thread::Builder::spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(target_os = "teeos", must_use)]
pub struct JoinHandle<T>(JoinInner<'static, T>);

#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Send for JoinHandle<T> {}
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Sync for JoinHandle<T> {}

impl<T> JoinHandle<T> {
    /// Extracts a handle to the underlying thread.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    ///
    /// let thread = join_handle.thread();
    /// println!("thread id: {:?}", thread.id());
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn thread(&self) -> &Thread {
        &self.0.thread
    }

    /// Waits for the associated thread to finish.
    ///
    /// This function will return immediately if the associated thread has already finished.
    ///
    /// In terms of [atomic memory orderings], the completion of the associated
    /// thread synchronizes with this function returning. In other words, all
    /// operations performed by that thread [happen
    /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all
    /// operations that happen after `join` returns.
    ///
    /// If the associated thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`] (though see the Notes below).
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    ///
    /// # Notes
    ///
    /// If a "foreign" unwinding operation (e.g. an exception thrown from C++
    /// code, or a `panic!` in Rust code compiled or linked with a different
    /// runtime) unwinds all the way to the thread root, the process may be
    /// aborted; see the Notes on [`thread::spawn`]. If the process is not
    /// aborted, this function will return a `Result::Err` containing an opaque
    /// type.
    ///
    /// [`catch_unwind`]: ../../std/panic/fn.catch_unwind.html
    /// [`thread::spawn`]: spawn
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(self) -> Result<T> {
        self.0.join()
    }

    /// Checks if the associated thread has finished running its main function.
    ///
    /// `is_finished` supports implementing a non-blocking join operation, by checking
    /// `is_finished`, and calling `join` if it returns `true`. This function does not block. To
    /// block while waiting on the thread to finish, use [`join`][Self::join].
    ///
    /// This might return `true` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    /// However, once this returns `true`, [`join`][Self::join] can be expected
    /// to return quickly, without blocking for any significant amount of time.
    #[stable(feature = "thread_is_running", since = "1.61.0")]
    pub fn is_finished(&self) -> bool {
        Arc::strong_count(&self.0.packet) == 1
    }
}
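As a usage sketch for the non-blocking join described above: poll `is_finished` and only call `join` once it reports completion (the sleep durations here are arbitrary illustration values, not recommendations):

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // A worker that takes a little while to produce its result.
    let handle = thread::spawn(|| {
        thread::sleep(Duration::from_millis(50));
        42
    });

    // Poll without blocking; do other work between checks.
    while !handle.is_finished() {
        println!("still working...");
        thread::sleep(Duration::from_millis(10));
    }

    // Once is_finished() returns true, join() completes quickly.
    let value = handle.join().expect("worker panicked");
    println!("worker returned {value}"); // prints "worker returned 42"
}
```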

impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        &self.0.native
    }
}

impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("JoinHandle").finish_non_exhaustive()
    }
}

fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}

/// Returns an estimate of the default amount of parallelism a program should use.
///
/// Parallelism is a resource. A given machine provides a certain capacity for
/// parallelism, i.e., a bound on the number of computations it can perform
/// simultaneously. This number often corresponds to the amount of CPUs a
/// computer has, but it may diverge in various cases.
///
/// Host environments such as VMs or container orchestrators may want to
/// restrict the amount of parallelism made available to programs in them. This
/// is often done to limit the potential impact of (unintentionally)
/// resource-intensive programs on other programs running on the same machine.
///
/// # Limitations
///
/// The purpose of this API is to provide an easy and portable way to query
/// the default amount of parallelism the program should use. Among other things it
/// does not expose information on NUMA regions, does not account for
/// differences in (co)processor capabilities or current system load,
/// and will not modify the program's global state in order to more accurately
/// query the amount of available parallelism.
///
/// Where both fixed steady-state and burst limits are available the steady-state
/// capacity will be used to ensure more predictable latencies.
///
/// Resource limits can be changed during the runtime of a program, therefore the value is
/// not cached and instead recomputed every time this function is called. It should not be
/// called from hot code.
///
/// The value returned by this function should be considered a simplified
/// approximation of the actual amount of parallelism available at any given
/// time. To get a more detailed or precise overview of the amount of
/// parallelism available to the program, you may wish to use
/// platform-specific APIs as well. The following platform limitations currently
/// apply to `available_parallelism`:
///
/// On Windows:
/// - It may undercount the amount of parallelism available on systems with more
///   than 64 logical CPUs. However, programs typically need specific support to
///   take advantage of more than 64 logical CPUs, and in the absence of such
///   support, the number returned by this function accurately reflects the
///   number of logical CPUs the program can use by default.
/// - It may overcount the amount of parallelism available on systems limited by
///   process-wide affinity masks, or job object limitations.
///
/// On Linux:
/// - It may overcount the amount of parallelism available when limited by a
///   process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be
///   queried, e.g. due to sandboxing.
/// - It may undercount the amount of parallelism if the current thread's affinity mask
///   does not reflect the process' cpuset, e.g. due to pinned threads.
/// - If the process is in a cgroup v1 cpu controller, this may need to
///   scan mountpoints to find the corresponding cgroup v1 controller,
///   which may take time on systems with large numbers of mountpoints.
///   (This does not apply to cgroup v2, or to processes not in a
///   cgroup.)
/// - It does not attempt to take `ulimit` into account. If there is a limit set on the number of
///   threads, `available_parallelism` cannot know how much of that limit a Rust program should
///   take, or know in a reliable and race-free way how much of that limit is already taken.
///
/// On all targets:
/// - It may overcount the amount of parallelism available when running in a VM
///   with CPU usage limits (e.g. an overcommitted host).
2070/// # Errors
2071///
2072/// This function will, but is not limited to, return errors in the following
2073/// cases:
2074///
2075/// - If the amount of parallelism is not known for the target platform.
2076/// - If the program lacks permission to query the amount of parallelism made
2077/// available to it.
2078///
2079/// # Examples
2080///
2081/// ```
2082/// # #![allow(dead_code)]
2083/// use std::{io, thread};
2084///
2085/// fn main() -> io::Result<()> {
2086/// let count = thread::available_parallelism()?.get();
2087/// assert!(count >= 1_usize);
2088/// Ok(())
2089/// }
2090/// ```
2091#[doc(alias = "available_concurrency")] // Alias for a previous name we gave this API on unstable.
2092#[doc(alias = "hardware_concurrency")] // Alias for C++ `std::thread::hardware_concurrency`.
2093#[doc(alias = "num_cpus")] // Alias for a popular ecosystem crate which provides similar functionality.
2094#[stable(feature = "available_parallelism", since = "1.59.0")]
2095pub fn available_parallelism() -> io::Result<NonZero<usize>> {
2096 imp::available_parallelism()
2097}