std/thread/
functions.rs

//! Free functions.

use super::builder::Builder;
use super::current::current;
use super::join_handle::JoinHandle;
use crate::mem::forget;
use crate::num::NonZero;
use crate::sys::thread as imp;
use crate::time::{Duration, Instant};
use crate::{io, panicking};

/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle provides a [`join`] method that can be used to join the spawned
/// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing
/// the argument given to [`panic!`].
///
/// If the join handle is dropped, the spawned thread will implicitly be *detached*.
/// In this case, the spawned thread may no longer be joined.
/// (It is the responsibility of the program to either eventually join threads it
/// creates or detach them; otherwise, a resource leak will result.)
///
/// This function creates a thread with the default parameters of [`Builder`].
/// To specify the new thread's stack size or the name, use [`Builder::spawn`].
///
/// As the signature of `spawn` shows, there are two constraints on both the
/// closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can outlive the lifetime they have been created in.
///
///   Indeed, if the thread (and by extension its return value) can outlive its
///   caller, we need to make sure that it will be valid afterwards, and since
///   we *can't* know when it will return, we need to have it valid as long as
///   possible, that is, until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that a value is safe to
///   pass from thread to thread. [`Sync`] expresses that it is safe to pass a
///   reference from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`]; here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{result}");
/// ```
///
/// # Notes
///
/// This function has the same minimal guarantee regarding "foreign" unwinding operations (e.g.
/// an exception thrown from C++ code, or a `panic!` in Rust code compiled or linked with a
/// different runtime) as [`catch_unwind`]; namely, if the thread created with `thread::spawn`
/// unwinds all the way to the root with such an exception, one of two behaviors is possible,
/// and it is unspecified which will occur:
///
/// * The process aborts.
/// * The process does not abort, and [`join`] will return a `Result::Err`
///   containing an opaque type.
///
/// [`catch_unwind`]: ../../std/panic/fn.catch_unwind.html
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}
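As a worked illustration of the `Send + 'static` bounds discussed above, here is a minimal sketch (not part of the original docs) that uses `Arc` so a `move` closure owns its captured state and satisfies both bounds without copying the underlying data:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send and owns its data, so a `move` closure
    // capturing a clone of it is both `Send` and `'static`.
    let data = Arc::new(vec![1, 2, 3]);
    let shared = Arc::clone(&data);

    let handle = thread::spawn(move || {
        // The spawned thread reads through its own Arc clone.
        shared.iter().sum::<i32>()
    });

    let sum = handle.join().unwrap();
    assert_eq!(sum, 6);
    // The original Arc is still usable on the main thread.
    assert_eq!(data.len(), 3);
}
```

Cloning the `Arc` (rather than moving `data` itself) keeps the value usable on the spawning thread after the new thread starts.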

/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This calls the underlying OS scheduler's yield primitive, signaling
/// that the calling thread is willing to give up its remaining timeslice
/// so that the OS may schedule other threads on the CPU.
///
/// A drawback of yielding in a loop is that if the OS does not have any
/// other ready threads to run on the current CPU, the thread will effectively
/// busy-wait, which wastes CPU time and energy.
///
/// Therefore, when waiting for events of interest, a programmer's first
/// choice should be to use synchronization devices such as [`channel`]s,
/// [`Condvar`]s, [`Mutex`]es or [`join`], since these primitives are
/// implemented in a blocking manner, giving up the CPU until the event
/// of interest has occurred, which avoids repeated yielding.
///
/// `yield_now` should thus be used only rarely, mostly in situations where
/// repeated polling is required because there is no other suitable way to
/// learn when an event of interest has occurred.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::yield_now()
}
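To make the busy-wait caveat above concrete, here is a sketch (an illustrative assumption, not from the original docs) of the spin-and-yield pattern: the loop stays runnable the whole time, so it only makes sense for waits expected to be very short.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let ready = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&ready);

    let worker = thread::spawn(move || {
        flag.store(true, Ordering::Release);
    });

    // Spin, yielding the timeslice on each miss. As the docs note, this
    // busy-waits whenever no other thread is runnable on this CPU; prefer
    // blocking primitives (channels, Condvar, join) for longer waits.
    while !ready.load(Ordering::Acquire) {
        thread::yield_now();
    }

    worker.join().unwrap();
    assert!(ready.load(Ordering::Relaxed));
}
```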

/// Determines whether the current thread is panicking.
///
/// This returns `true` both when the thread is unwinding due to a panic
/// and when it is executing a panic hook. Note that the latter case still
/// happens when `panic=abort` is set.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     print!("a: ");
///     let a = SomeStruct;
/// }
///
/// {
///     print!("b: ");
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[inline]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}

/// Use [`sleep`].
///
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}

/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// Currently, specifying a zero duration on Unix platforms returns immediately
/// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
/// platforms the underlying [`Sleep`] syscall is always invoked.
/// If the intention is to yield the current time-slice you may want to use
/// [`yield_now`] instead.
///
/// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
/// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::sleep(dur)
}

/// Puts the current thread to sleep until the specified deadline has passed.
///
/// The thread may still be asleep after the deadline specified due to
/// scheduling specifics or platform-dependent functionality. It will never
/// wake before the deadline.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// In most cases this function will call an OS-specific function. Where that
/// is not supported, [`sleep`] is used. Those platforms are referred to as
/// "Other" in the table below.
///
/// # Underlying System calls
///
/// The following system calls are [currently] being used:
///
/// | Platform           | System call                                                      |
/// |--------------------|------------------------------------------------------------------|
/// | Linux              | [clock_nanosleep] (Monotonic Clock)                              |
/// | BSD except OpenBSD | [clock_nanosleep] (Monotonic Clock)                              |
/// | Android            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Solaris            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Illumos            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Dragonfly          | [clock_nanosleep] (Monotonic Clock)                              |
/// | Hurd               | [clock_nanosleep] (Monotonic Clock)                              |
/// | Vxworks            | [clock_nanosleep] (Monotonic Clock)                              |
/// | Apple              | `mach_wait_until`                                                |
/// | Other              | `sleep_until` uses [`sleep`] and does not issue a syscall itself |
///
/// [currently]: crate::io#platform-specific-behavior
/// [clock_nanosleep]: https://linux.die.net/man/3/clock_nanosleep
///
/// **Disclaimer:** These system calls might change over time.
///
/// # Examples
///
/// A simple game loop that limits the game to 60 frames per second.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # fn update() {}
/// # fn render() {}
/// #
/// let max_fps = 60.0;
/// let frame_time = Duration::from_secs_f32(1.0 / max_fps);
/// let mut next_frame = Instant::now();
/// loop {
///     thread::sleep_until(next_frame);
///     next_frame += frame_time;
///     update();
///     render();
/// }
/// ```
///
/// A slow API we must not call too fast and which takes a few
/// tries before succeeding. By using `sleep_until`, the time the
/// API call takes does not influence when we retry or when we give up.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # enum Status {
/// #     Ready(usize),
/// #     Waiting,
/// # }
/// # fn slow_web_api_call() -> Status { Status::Ready(42) }
/// #
/// # const MAX_DURATION: Duration = Duration::from_secs(10);
/// #
/// # fn try_api_call() -> Result<usize, ()> {
/// let deadline = Instant::now() + MAX_DURATION;
/// let delay = Duration::from_millis(250);
/// let mut next_attempt = Instant::now();
/// loop {
///     if Instant::now() > deadline {
///         break Err(());
///     }
///     if let Status::Ready(data) = slow_web_api_call() {
///         break Ok(data);
///     }
///
///     next_attempt = deadline.min(next_attempt + delay);
///     thread::sleep_until(next_attempt);
/// }
/// # }
/// # let _data = try_api_call();
/// ```
#[unstable(feature = "thread_sleep_until", issue = "113752")]
pub fn sleep_until(deadline: Instant) {
    imp::sleep_until(deadline)
}

/// Used to ensure that `park` and `park_timeout` do not unwind, as that can
/// cause undefined behavior if not handled correctly (see #102398 for context).
struct PanicGuard;

impl Drop for PanicGuard {
    fn drop(&mut self) {
        rtabort!("an irrecoverable error occurred while synchronizing threads")
    }
}

/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility. However,
/// it is guaranteed that this function will not panic (it may abort the
/// process if the implementation encounters some rare errors).
///
/// # `park` and `unpark`
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token can be held by a thread even if it is currently not
///   parked, [`unpark`] followed by [`park`] will result in the second call returning immediately.
///   However, note that to rely on this guarantee, you need to make sure that your `unpark` happens
///   after all `park`s that may be done by other data structures!
///
/// The API is typically used by acquiring a handle to the current thread, placing that handle in a
/// shared data structure so that other threads can find it, and then `park`ing in a loop. When some
/// desired condition is met, another thread calls [`unpark`] on the handle. The last bullet point
/// above guarantees that even if the `unpark` occurs before the thread is finished `park`ing, it
/// will be woken up properly.
///
/// Note that the coordination via the shared data structure is crucial: If you `unpark` a thread
/// without first establishing that it is about to be `park`ing within your code, that `unpark` may
/// get consumed by a *different* `park` in the same thread, leading to a deadlock. This also means
/// you must not call unknown code between setting up for parking and calling `park`; for instance,
/// if you invoke `println!`, that may itself call `park` and thus consume your `unpark` and cause a
/// deadlock.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
///
/// # Memory Ordering
///
/// Calls to `unpark` _synchronize-with_ calls to `park`, meaning that memory
/// operations performed before a call to `unpark` are made visible to the thread that
/// consumes the token and returns from `park`. Note that all `park` and `unpark`
/// operations for a given thread form a total order and _all_ prior `unpark` operations
/// synchronize-with `park`.
///
/// In atomic ordering terms, `unpark` performs a `Release` operation and `park`
/// performs the corresponding `Acquire` operation. Calls to `unpark` for the same
/// thread form a [release sequence].
///
/// Note that being unblocked does not imply a call was made to `unpark`, because
/// wakeups can also be spurious. For example, a valid, but inefficient,
/// implementation could have `park` and `unpark` return immediately without doing anything,
/// making *all* wakeups spurious.
///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::atomic::{Ordering, AtomicBool};
/// use std::time::Duration;
///
/// static QUEUED: AtomicBool = AtomicBool::new(false);
/// static FLAG: AtomicBool = AtomicBool::new(false);
///
/// let parked_thread = thread::spawn(move || {
///     println!("Thread spawned");
///     // Signal that we are going to `park`. Between this store and our `park`, there may
///     // be no other `park`, or else that `park` could consume our `unpark` token!
///     QUEUED.store(true, Ordering::Release);
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !FLAG.load(Ordering::Acquire) {
///         // We can *not* use `println!` here since that could use thread parking internally.
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Ensure the thread is about to park.
/// // This is crucial! It guarantees that the `unpark` below is not consumed
/// // by some other code in the parked thread (e.g. inside `println!`).
/// while !QUEUED.load(Ordering::Acquire) {
///     // Spinning is of course inefficient; in practice, this would more likely be
///     // a dequeue where we have no work to do if there's nobody queued.
///     std::hint::spin_loop();
/// }
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here: if `unpark`
/// // happens first, `park` will return immediately.
/// // There is also no other `park` that could consume this token,
/// // since we waited until the other thread got queued.
/// // Hence there is no risk of a deadlock.
/// FLAG.store(true, Ordering::Release);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`Thread`]: super::Thread
/// [`unpark`]: super::Thread::unpark
/// [`thread::park_timeout`]: park_timeout
/// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    let guard = PanicGuard;
    // SAFETY: park is called on the parker owned by this thread.
    unsafe {
        current().park();
    }
    // No panic occurred, do not abort.
    forget(guard);
}

/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `ms`
/// milliseconds. This method should not be used for precise timing due to
/// anomalies such as preemption or platform differences that might not cause
/// the maximum amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}

/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {elapsed:?}");
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    let guard = PanicGuard;
    // SAFETY: park_timeout is called on a handle owned by this thread.
    unsafe {
        current().park_timeout(dur);
    }
    // No panic occurred, do not abort.
    forget(guard);
}
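The park/unpark pattern described above can be combined with `park_timeout` so a waiter never blocks indefinitely. A minimal sketch (an illustrative assumption, not from the original docs): the condition is re-checked in a loop because wakeups may be spurious, and each wait is bounded.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

fn main() {
    let done = Arc::new(AtomicBool::new(false));
    let done2 = Arc::clone(&done);
    let main_thread = thread::current();

    let worker = thread::spawn(move || {
        // Publish the result, then hand the waiter a token (or wake it).
        done2.store(true, Ordering::Release);
        main_thread.unpark();
    });

    // Park in a loop: wakeups may be spurious, so re-check the condition
    // each time, and bound each wait so we never block forever even if
    // the `unpark` was consumed elsewhere.
    while !done.load(Ordering::Acquire) {
        thread::park_timeout(Duration::from_millis(100));
    }

    worker.join().unwrap();
}
```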

/// Returns an estimate of the default amount of parallelism a program should use.
///
/// Parallelism is a resource. A given machine provides a certain capacity for
/// parallelism, i.e., a bound on the number of computations it can perform
/// simultaneously. This number often corresponds to the amount of CPUs a
/// computer has, but it may diverge in various cases.
///
/// Host environments such as VMs or container orchestrators may want to
/// restrict the amount of parallelism made available to programs in them. This
/// is often done to limit the potential impact of (unintentionally)
/// resource-intensive programs on other programs running on the same machine.
///
/// # Limitations
///
/// The purpose of this API is to provide an easy and portable way to query
/// the default amount of parallelism the program should use. Among other things it
/// does not expose information on NUMA regions, does not account for
/// differences in (co)processor capabilities or current system load,
/// and will not modify the program's global state in order to more accurately
/// query the amount of available parallelism.
///
/// Where both fixed steady-state and burst limits are available the steady-state
/// capacity will be used to ensure more predictable latencies.
///
/// Resource limits can be changed during the runtime of a program, therefore the value is
/// not cached and instead recomputed every time this function is called. It should not be
/// called from hot code.
///
/// The value returned by this function should be considered a simplified
/// approximation of the actual amount of parallelism available at any given
/// time. To get a more detailed or precise overview of the amount of
/// parallelism available to the program, you may wish to use
/// platform-specific APIs as well. The following platform limitations currently
/// apply to `available_parallelism`:
///
/// On Windows:
/// - It may undercount the amount of parallelism available on systems with more
///   than 64 logical CPUs. However, programs typically need specific support to
///   take advantage of more than 64 logical CPUs, and in the absence of such
///   support, the number returned by this function accurately reflects the
///   number of logical CPUs the program can use by default.
/// - It may overcount the amount of parallelism available on systems limited by
///   process-wide affinity masks, or job object limitations.
///
/// On Linux:
/// - It may overcount the amount of parallelism available when limited by a
///   process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be
///   queried, e.g. due to sandboxing.
/// - It may undercount the amount of parallelism if the current thread's affinity mask
///   does not reflect the process' cpuset, e.g. due to pinned threads.
/// - If the process is in a cgroup v1 cpu controller, this may need to
///   scan mountpoints to find the corresponding cgroup v1 controller,
///   which may take time on systems with large numbers of mountpoints.
///   (This does not apply to cgroup v2, or to processes not in a
///   cgroup.)
/// - It does not attempt to take `ulimit` into account. If there is a limit set on the number of
///   threads, `available_parallelism` cannot know how much of that limit a Rust program should
///   take, or know in a reliable and race-free way how much of that limit is already taken.
///
/// On all targets:
/// - It may overcount the amount of parallelism available when running in a VM
///   with CPU usage limits (e.g. an overcommitted host).
///
/// # Errors
///
/// This function will, but is not limited to, return errors in the following
/// cases:
///
/// - If the amount of parallelism is not known for the target platform.
/// - If the program lacks permission to query the amount of parallelism made
///   available to it.
///
/// # Examples
///
/// ```
/// # #![allow(dead_code)]
/// use std::{io, thread};
///
/// fn main() -> io::Result<()> {
///     let count = thread::available_parallelism()?.get();
///     assert!(count >= 1_usize);
///     Ok(())
/// }
/// ```
#[doc(alias = "available_concurrency")] // Alias for a previous name we gave this API on unstable.
#[doc(alias = "hardware_concurrency")] // Alias for C++ `std::thread::hardware_concurrency`.
#[doc(alias = "num_cpus")] // Alias for a popular ecosystem crate which provides similar functionality.
#[stable(feature = "available_parallelism", since = "1.59.0")]
pub fn available_parallelism() -> io::Result<NonZero<usize>> {
    imp::available_parallelism()
}
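Since the query can fail (it returns `io::Result`), a common pattern when sizing a worker pool is to fall back to a single worker rather than propagate the error. A minimal sketch (an editorial illustration, not from the original docs):

```rust
use std::thread;

fn main() {
    // If parallelism can't be determined (an io::Error), default to 1
    // worker instead of failing.
    let workers = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);

    // NonZero<usize> guarantees the happy path is >= 1, and the fallback
    // is 1, so this always holds.
    assert!(workers >= 1);
    println!("using {workers} worker threads");
}
```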