Management of the interaction between the main `cargo` process and all spawned jobs.
§Overview
This module implements a job queue. A job here represents a unit of work, which is roughly a rustc invocation, a build script run, or just a no-op. The job queue primarily handles the following things:

- Spawns concurrent jobs. Depending on its `Freshness`, a job is either executed on a spawned thread or run on the same thread to avoid the threading overhead.
- Controls the degree of concurrency. It allocates and manages `jobserver` tokens for each spawned-off rustc and build script.
- Manages the communication between the main `cargo` process and its spawned jobs. Those `Message`s are sent over a `Queue` shared across threads.
- Schedules the execution order of each `Job`. Priorities are determined when calling `JobQueue::enqueue` to enqueue a job. The scheduling is relatively rudimentary and could likely be improved.
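The fresh-vs-dirty dispatch described above can be sketched roughly as follows. This is a minimal illustration with hypothetical stand-in types, not Cargo's actual `Job`/`Freshness` implementation:

```rust
use std::thread;

// Hypothetical stand-ins for Cargo's `Freshness` and `Job` types.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Freshness {
    Fresh, // up to date: run inline, no thread spawn
    Dirty, // needs real work: run on its own thread
}

struct Job {
    freshness: Freshness,
    work: Box<dyn FnOnce() -> String + Send>,
}

fn run_job(job: Job) -> String {
    match job.freshness {
        // Fresh jobs are cheap no-ops, so avoid the threading overhead.
        Freshness::Fresh => (job.work)(),
        // Dirty jobs do real work (e.g. a rustc invocation), so spawn a thread.
        Freshness::Dirty => thread::spawn(job.work).join().unwrap(),
    }
}

fn main() {
    let fresh = Job { freshness: Freshness::Fresh, work: Box::new(|| "cached".to_string()) };
    let dirty = Job { freshness: Freshness::Dirty, work: Box::new(|| "compiled".to_string()) };
    println!("{} {}", run_job(fresh), run_job(dirty));
}
```

The real queue runs many dirty jobs concurrently rather than joining each one immediately; the point here is only the inline-vs-spawned distinction.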
A rough outline of building a queue and executing jobs is:

1. `JobQueue::new` to create a queue.
2. `JobQueue::enqueue` to add new jobs onto the queue.
3. Consume the queue and execute all jobs via `JobQueue::execute`.

The primary loop happens inside `JobQueue::execute`, which is effectively `DrainState::drain_the_queue`. `DrainState` is, as its name suggests, the running state of the job queue while it is being drained.
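The shape of that primary loop can be sketched as follows, using `std::sync::mpsc` as a stand-in for Cargo's queue and a hypothetical `Message` type. Cargo's real loop in `DrainState::drain_the_queue` handles many more message kinds:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type; the real `Message` enum has many variants.
enum Message {
    Finished(usize),
}

// Spawn one thread per job, then drain completion messages on the
// current thread until every job has reported back.
fn drain_the_queue(jobs: Vec<usize>) -> Vec<usize> {
    let (tx, rx) = mpsc::channel();
    let active = jobs.len();
    for id in jobs {
        let tx = tx.clone();
        thread::spawn(move || {
            // ... the unit of work for `id` would run here ...
            tx.send(Message::Finished(id)).unwrap();
        });
    }
    drop(tx);
    let mut finished = Vec::new();
    // The "primary loop": block on the queue until all jobs are drained.
    while finished.len() < active {
        match rx.recv().unwrap() {
            Message::Finished(id) => finished.push(id),
        }
    }
    finished
}

fn main() {
    let mut done = drain_the_queue(vec![1, 2, 3]);
    done.sort();
    println!("drained jobs: {:?}", done);
}
```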
§Jobserver
As of Feb. 2023, Cargo and rustc have a relatively simple jobserver
relationship with each other. They share a single jobserver amongst what
is potentially hundreds of threads of work on many-cored systems.
The jobserver could come from either the environment (e.g., from a `make`
invocation), or from Cargo creating its own jobserver if there is no
jobserver to inherit from.
Cargo wants to complete the build as quickly as possible, fully saturating
all cores (as constrained by the `-j=N` parameter). Cargo also must not spawn
more than N threads of work: the total number of tokens we have floating
around must always be limited to N.

It is not really possible to optimally choose which crate should build
first or last; nor is it possible to decide whether to give an additional
token to rustc first or rather spawn a new crate of work. The algorithm in
Cargo prioritizes spawning as many crates (i.e., rustc processes) as
possible. In short, the jobserver relationship among Cargo and rustc
processes is 1 `cargo` to N `rustc`. Cargo knows nothing beyond `rustc`
processes in terms of parallelism[^1].
We integrate with the `jobserver` crate, originating from the GNU make POSIX jobserver, to make sure that build scripts which use make to build C code can cooperate with us on the number of tokens in use and avoid overloading the system we're on.
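The token accounting can be sketched as a fixed budget of N tokens that each unit of work must acquire before running and release afterwards. This is a simplified in-process model; Cargo uses the `jobserver` crate and a real pipe/FIFO so external `make` processes can share the same pool:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// A minimal sketch of jobserver-style token accounting with a fixed budget.
struct TokenPool {
    tokens: Mutex<usize>,
    cond: Condvar,
}

impl TokenPool {
    fn new(n: usize) -> Self {
        TokenPool { tokens: Mutex::new(n), cond: Condvar::new() }
    }
    // Block until a token is available, then take it.
    fn acquire(&self) {
        let mut t = self.tokens.lock().unwrap();
        while *t == 0 {
            t = self.cond.wait(t).unwrap();
        }
        *t -= 1;
    }
    // Return a token and wake one waiter.
    fn release(&self) {
        *self.tokens.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}

fn main() {
    // At most 2 of the 8 simulated "rustc" jobs may run at once.
    let pool = Arc::new(TokenPool::new(2));
    let running = Arc::new(Mutex::new((0usize, 0usize))); // (current, peak)
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let (pool, running) = (pool.clone(), running.clone());
            thread::spawn(move || {
                pool.acquire();
                {
                    let mut r = running.lock().unwrap();
                    r.0 += 1;
                    r.1 = r.1.max(r.0);
                }
                thread::sleep(Duration::from_millis(10)); // simulated work
                running.lock().unwrap().0 -= 1;
                pool.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let peak = running.lock().unwrap().1;
    assert!(peak <= 2);
    println!("peak concurrency: {}", peak);
}
```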
§Scheduling
The current scheduling algorithm is not really polished. It is simply based
on a dependency graph, `DependencyQueue`. We continue adding nodes onto
the graph until we finalize it. When the graph is finalized, for each node
we sum the costs of its dependencies, including transitive ones; that sum
of dependency costs becomes the cost of the node itself.

For the time being, the cost is just a fixed placeholder passed in
`JobQueue::enqueue`. In the future, we could explore more possibilities
around it. For instance, we could start persisting timing information for
each build somewhere; for a subsequent build, we could look into the
historical data and perform a PGO-like optimization to prioritize jobs,
making a build fully pipelined.
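The cost model above can be sketched as a memoized walk over the dependency graph, where each node's total is its own placeholder cost plus the costs of its transitive dependencies. Names and the exact aggregation are illustrative, not Cargo's `DependencyQueue` internals:

```rust
use std::collections::HashMap;

// Hypothetical cost computation: a node's total cost is its own cost plus
// the total cost of each of its dependencies, computed recursively and
// memoized so shared subgraphs are only walked once.
fn total_cost(
    node: &str,
    own_cost: &HashMap<&str, u32>,
    deps: &HashMap<&str, Vec<&str>>,
    memo: &mut HashMap<String, u32>,
) -> u32 {
    if let Some(&c) = memo.get(node) {
        return c;
    }
    let mut cost = own_cost[node];
    if let Some(ds) = deps.get(node) {
        for dep in ds {
            cost += total_cost(dep, own_cost, deps, memo);
        }
    }
    memo.insert(node.to_string(), cost);
    cost
}

fn main() {
    // bin -> lib -> (serde, libc); every unit has a placeholder cost of 10.
    let own_cost = HashMap::from([("bin", 10), ("lib", 10), ("serde", 10), ("libc", 10)]);
    let deps = HashMap::from([("bin", vec!["lib"]), ("lib", vec!["serde", "libc"])]);
    let mut memo = HashMap::new();
    // bin = 10 + (10 + 10 + 10) = 40; nodes that many others depend on end
    // up with higher totals and would be prioritized first.
    let c = total_cost("bin", &own_cost, &deps, &mut memo);
    println!("cost(bin) = {}", c);
}
```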
§Message queue
Each spawned thread running a process uses the message queue Queue
to
send messages back to the main thread (the one running cargo
).
The main thread coordinates everything, and handles printing output.
It is important to be careful which messages use push
vs push_bounded
.
push
is for priority messages (like tokens, or “finished”) where the
sender shouldn’t block. We want to handle those so real work can proceed
ASAP.
push_bounded
is only for messages being printed to stdout/stderr. Being
bounded prevents a flood of messages causing a large amount of memory
being used.
push
also avoids blocking which helps avoid deadlocks. For example, when
the diagnostic server thread is dropped, it waits for the thread to exit.
But if the thread is blocked on a full queue, and there is a critical
error, the drop will deadlock. This should be fixed at some point in the
future. The jobserver thread has a similar problem, though it will time
out after 1 second.
To access the message queue, each running Job
is given its own JobState
,
containing everything it needs to communicate with the main thread.
See Message
for all available message kinds.
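The two send flavors can be sketched with std channels as a stand-in for Cargo's `Queue` (hypothetical types; Cargo has its own queue implementation): an unbounded channel never blocks the sender, like `push`, while a bounded one applies backpressure to chatty output, like `push_bounded`:

```rust
use std::sync::mpsc;
use std::thread;

// Send `n` output lines through a bounded queue of the given capacity;
// the consumer drains them on the current thread. The sender blocks
// whenever it gets more than `capacity` messages ahead, which caps the
// memory a flood of diagnostics can consume.
fn drain_bounded(capacity: usize, n: usize) -> Vec<String> {
    let (tx, rx) = mpsc::sync_channel::<String>(capacity);
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(format!("warning {i}")).unwrap();
        }
    });
    let lines: Vec<String> = rx.iter().collect(); // ends when tx is dropped
    producer.join().unwrap();
    lines
}

fn main() {
    // "push": an unbounded channel never blocks the sender, so priority
    // messages (tokens, "finished") go through immediately.
    let (priority_tx, priority_rx) = mpsc::channel::<&str>();
    priority_tx.send("finished").unwrap();
    assert_eq!(priority_rx.recv().unwrap(), "finished");

    // "push_bounded": stdout/stderr output goes through a small bounded
    // queue so a flood of messages cannot use unbounded memory.
    let lines = drain_bounded(2, 5);
    println!("drained {} output lines", lines.len());
}
```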
[^1]: In fact, the `jobserver` crate that Cargo uses also manages the allocation of tokens to rustc beyond the implicit token each rustc owns (i.e., the ones used for parallel LLVM work and parallel rustc threads). See also "Rust Compiler Development Guide: Parallel Compilation" and this comment in rust-lang/rust.
Re-exports§
pub use self::job::Freshness;
pub use self::job::Freshness::Dirty;
pub use self::job::Freshness::Fresh;
pub use self::job::Job;
pub use self::job::Work;
pub use self::job_state::JobState;
Modules§
Structs§
- Handler for deduplicating diagnostics.
- This structure is backed by the `DependencyQueue` type and manages the actual compilation step of each package. Packages enqueue units of work and then later on the entire graph is processed and compiled.
- This structure is backed by the `DependencyQueue` type and manages the queueing of compilation steps for each package. Packages enqueue units of work and then later on the entire graph is converted to `DrainState` and executed.
- Count of warnings, used to print a summary after the job succeeds.