cargo/core/resolver/mod.rs
//! Resolution of the entire dependency graph for a crate.
//!
//! This module implements the core logic of taking the world of crates and
//! constraints and creating a resolved graph with locked versions for all
//! crates and their dependencies. This is separate from the registry module,
//! which is concerned with discovering crates from various sources; this
//! module just uses the `Registry` trait as a source to learn about crates.
//!
//! Actually solving a constraint graph is an NP-hard problem. This algorithm
//! is basically a nice heuristic to make sure we get roughly the best answer
//! most of the time. The constraints that we're working with are:
//!
//! 1. Each crate can have any number of dependencies. Each dependency can
//!    declare a version range that it is compatible with.
//! 2. Crates can be activated at multiple versions (e.g., show up in the
//!    dependency graph twice) so long as each pair of activated instances
//!    has semver-incompatible versions (see the sketch below).
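//!
//!    As a rough illustration of this rule (a minimal sketch of a
//!    hypothetical helper, not the resolver's actual code, and ignoring
//!    pre-release versions), the following shows which version pairs count
//!    as "semver-compatible" and therefore may *not* both be activated:
//!
//!    ```ignore
//!    /// Two versions occupy the same activation "slot" when they are
//!    /// semver-compatible; only one version per slot may be activated.
//!    fn same_slot(a: &semver::Version, b: &semver::Version) -> bool {
//!        if a.major != b.major {
//!            return false; // e.g. `1.0.2` and `2.0.0` may coexist
//!        }
//!        if a.major != 0 {
//!            return true; // e.g. `1.0.2` and `1.2.0` may not coexist
//!        }
//!        if a.minor != b.minor {
//!            return false; // e.g. `0.1.2` and `0.2.0` may coexist
//!        }
//!        if a.minor != 0 {
//!            return true; // e.g. `0.1.2` and `0.1.9` may not coexist
//!        }
//!        // `0.0.x` versions are only compatible with themselves.
//!        a.patch == b.patch
//!    }
//!    ```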
//!
//! The algorithm employed here is fairly simple: we do a DFS, activating the
//! "newest crate" (highest version) first and then going to the next option.
//! The heuristics we employ are:
//!
//! * Never try to activate a crate version which is incompatible. This means we
//!   only try crates which will actually satisfy a dependency, and we won't ever
//!   try to activate a crate that's semver-compatible with something else
//!   already activated (as we're only allowed to have one), nor try to activate
//!   a crate that has the same `links` attribute as something else activated.
//! * Always try to activate the highest version crate first. The default
//!   dependency requirement in Cargo (e.g., when you write `foo = "0.1.2"`) is
//!   semver-compatible, so selecting the highest version possible will
//!   hopefully let us satisfy as many dependencies at once as possible
//!   (illustrated below).
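//!
//!   As a rough illustration of the second point (a sketch using the `semver`
//!   crate; the requirement and versions are made up for the example), the
//!   default requirement is caret-style, so the highest matching version
//!   covers the widest range of sibling requirements:
//!
//!   ```ignore
//!   use semver::{Version, VersionReq};
//!
//!   // `foo = "0.1.2"` behaves like `^0.1.2`: anything `>= 0.1.2, < 0.2.0`.
//!   let req = VersionReq::parse("^0.1.2").unwrap();
//!   assert!(req.matches(&Version::parse("0.1.9").unwrap()));
//!   assert!(!req.matches(&Version::parse("0.2.0").unwrap()));
//!   // Activating `0.1.9` (the highest match) also satisfies a sibling
//!   // requirement such as `foo = "0.1.7"`, whereas `0.1.2` would not.
//!   ```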
//!
//! Beyond that, what's implemented below is just a naive backtracking version
//! which should in theory try all possible combinations of dependencies and
//! versions to see if one works. The first resolution that works causes
//! everything to bail out immediately and return success, and only if *nothing*
//! works do we actually return an error up the stack.
//!
//! Resolution is currently performed twice:
//! 1. With all features enabled (this is what gets saved to `Cargo.lock`).
//! 2. With only the specific features the user selected on the command-line. Ideally this
//!    run will get removed in the future when transitioning to the new feature resolver.
//!
//! A new feature-specific resolver was added in 2020 which adds more sophisticated feature
//! resolution. It is located in the [`features`] module. The original dependency resolver still
//! performs feature unification, as it can help reduce the dependencies it has to consider during
//! resolution (rather than assuming every optional dependency of every package is enabled).
//! Checking if a feature is enabled must go through the new feature resolver.
//!
//! ## Performance
//!
//! Note that this is a relatively performance-critical portion of Cargo. The
//! data that we're processing is proportional to the size of the dependency
//! graph, which can often be quite large (e.g., take a look at Servo). To make
//! matters worse, the DFS algorithm we've implemented is inherently quite
//! inefficient. When we add the requirement of backtracking on top it means
//! that we're implementing something that probably shouldn't be allocating all
//! over the place.

use std::collections::{BTreeMap, HashMap, HashSet};
use std::rc::Rc;
use std::time::{Duration, Instant};

use tracing::{debug, trace};

use crate::core::PackageIdSpec;
use crate::core::{Dependency, PackageId, Registry, Summary};
use crate::util::context::GlobalContext;
use crate::util::errors::CargoResult;
use crate::util::network::PollExt;

use self::context::ResolverContext;
use self::dep_cache::RegistryQueryer;
use self::features::RequestedFeatures;
use self::types::{ConflictMap, ConflictReason, DepsFrame};
use self::types::{FeaturesSet, RcVecIter, RemainingDeps, ResolverProgress};

pub use self::encode::Metadata;
pub use self::encode::{EncodableDependency, EncodablePackageId, EncodableResolve};
pub use self::errors::{ActivateError, ActivateResult, ResolveError};
pub use self::features::{CliFeatures, ForceAllTargets, HasDevUnits};
pub use self::resolve::{Resolve, ResolveVersion};
pub use self::types::{ResolveBehavior, ResolveOpts};
pub use self::version_prefs::{VersionOrdering, VersionPreferences};

mod conflict_cache;
mod context;
mod dep_cache;
pub(crate) mod encode;
pub(crate) mod errors;
pub mod features;
mod resolve;
mod types;
mod version_prefs;

/// Builds the list of all packages required to build the first argument.
///
/// * `summaries` - the list of package summaries along with how to resolve
///   their features. This is a list of all top-level packages that are intended
///   to be part of the lock file (resolve output). These typically are a list
///   of all workspace members.
///
/// * `replacements` - this is a list of `[replace]` directives found in the
///   root of the workspace. The list here is a `PackageIdSpec` of what to
///   replace and a `Dependency` to replace that with. In general it's not
///   recommended to use `[replace]` any more; use `[patch]` instead, which
///   is supported elsewhere.
///
/// * `registry` - this is the source from which all package summaries are
///   loaded. It's expected that this is extensively configured ahead of time
///   and is idempotent with our requests to it (aka returns the same results
///   for the same query every time). Typically this is an instance of a
///   `PackageRegistry`.
///
/// * `version_prefs` - this represents a preference for some versions over others,
///   based on the lock file or other reasons such as `[patch]`es.
///
/// * `resolve_version` - this controls how the lockfile will be serialized.
///
/// * `gctx` - a location to print warnings and such, or `None` if no warnings
///   should be printed.
#[tracing::instrument(skip_all)]
pub fn resolve(
    summaries: &[(Summary, ResolveOpts)],
    replacements: &[(PackageIdSpec, Dependency)],
    registry: &mut dyn Registry,
    version_prefs: &VersionPreferences,
    resolve_version: ResolveVersion,
    gctx: Option<&GlobalContext>,
) -> CargoResult<Resolve> {
    let first_version = match gctx {
        Some(config) if config.cli_unstable().direct_minimal_versions => {
            Some(VersionOrdering::MinimumVersionsFirst)
        }
        _ => None,
    };
    let mut registry = RegistryQueryer::new(registry, replacements, version_prefs);

    // Global cache of the reasons for each time we backtrack.
    let mut past_conflicting_activations = conflict_cache::ConflictCache::new();

    let resolver_ctx = loop {
        let resolver_ctx = activate_deps_loop(
            &mut registry,
            summaries,
            first_version,
            gctx,
            &mut past_conflicting_activations,
        )?;
        if registry.reset_pending() {
            break resolver_ctx;
        } else {
            registry.registry.block_until_ready()?;
        }
    };

    let mut cksums = HashMap::new();
    for (summary, _) in resolver_ctx.activations.values() {
        let cksum = summary.checksum().map(|s| s.to_string());
        cksums.insert(summary.package_id(), cksum);
    }
    let graph = resolver_ctx.graph();
    let replacements = resolver_ctx.resolve_replacements(&registry);
    let features = resolver_ctx
        .resolve_features
        .iter()
        .map(|(k, v)| (*k, v.iter().cloned().collect()))
        .collect();
    let summaries = resolver_ctx
        .activations
        .into_iter()
        .map(|(_key, (summary, _age))| (summary.package_id(), summary))
        .collect();
    let resolve = Resolve::new(
        graph,
        replacements,
        features,
        cksums,
        BTreeMap::new(),
        Vec::new(),
        resolve_version,
        summaries,
    );

    check_cycles(&resolve)?;
    check_duplicate_pkgs_in_lockfile(&resolve)?;
    trace!("resolved: {:?}", resolve);

    Ok(resolve)
}

/// Recursively activates the dependencies for `summaries`, in depth-first order,
/// backtracking across possible candidates for each dependency as necessary.
///
/// If all dependencies can be activated and resolved to a version in the
/// dependency graph, the resolver context is returned.
fn activate_deps_loop(
    registry: &mut RegistryQueryer<'_>,
    summaries: &[(Summary, ResolveOpts)],
    first_version: Option<VersionOrdering>,
    gctx: Option<&GlobalContext>,
    past_conflicting_activations: &mut conflict_cache::ConflictCache,
) -> CargoResult<ResolverContext> {
    let mut resolver_ctx = ResolverContext::new();
    let mut backtrack_stack = Vec::new();
    let mut remaining_deps = RemainingDeps::new();

    // Activate all the initial summaries to kick off some work.
    for (summary, opts) in summaries {
        debug!("initial activation: {}", summary.package_id());
        let res = activate(
            &mut resolver_ctx,
            registry,
            None,
            summary.clone(),
            first_version,
            opts,
        );
        match res {
            Ok(Some((frame, _))) => remaining_deps.push(frame),
            Ok(None) => (),
            Err(ActivateError::Fatal(e)) => return Err(e),
            Err(ActivateError::Conflict(_, _)) => panic!("bad error from activate"),
        }
    }

    let mut printed = ResolverProgress::new();

    // Main resolution loop, this is the workhorse of the resolution algorithm.
    //
    // You'll note that a few stacks are maintained on the side, which might
    // seem odd when this algorithm looks like it could be implemented
    // recursively. While correct, this is implemented iteratively to avoid
    // blowing the stack (the recursion depth is proportional to the size of the
    // input).
    //
    // The general sketch of this loop is to run until there are no dependencies
    // left to activate, and for each dependency to attempt to activate all of
    // its own dependencies in turn. The `backtrack_stack` is a side table of
    // backtracking states that we can return to if we hit an error, in order to
    // attempt to continue resolving.
    while let Some((just_here_for_the_error_messages, frame)) =
        remaining_deps.pop_most_constrained()
    {
        let (mut parent, (mut dep, candidates, mut features)) = frame;

        // If we spend a lot of time here (we shouldn't in most cases) then give
        // a bit of a visual indicator as to what we're doing.
        printed.shell_status(gctx)?;

        trace!(
            "{}[{}]>{} {} candidates",
            parent.name(),
            resolver_ctx.age,
            dep.package_name(),
            candidates.len()
        );

        let just_here_for_the_error_messages = just_here_for_the_error_messages
            && past_conflicting_activations
                .conflicting(&resolver_ctx, &dep)
                .is_some();

        let mut remaining_candidates = RemainingCandidates::new(&candidates);

        // `conflicting_activations` stores all the reasons we were unable to
        // activate candidates. One of these reasons will have to go away for
        // backtracking to find a place to restart. It is also the list of
        // things to explain in the error message if we fail to resolve.
        //
        // This is a map of package ID to a reason why that package caused a
        // conflict for us.
        let mut conflicting_activations = ConflictMap::new();

        // When backtracking we don't fully update `conflicting_activations`,
        // especially for the cases that we didn't make a backtrack frame in the
        // first place. This `backtracked` var stores whether we are continuing
        // from a restored backtrack frame so that we can skip caching
        // `conflicting_activations` in `past_conflicting_activations`.
        let mut backtracked = false;

        loop {
            let next = remaining_candidates.next(&mut conflicting_activations, &resolver_ctx);

            let (candidate, has_another) = next.ok_or(()).or_else(|_| {
                // If we get here then our `remaining_candidates` was just
                // exhausted, so `dep` failed to activate.
                //
                // It's our job here to backtrack, if possible, and find a
                // different candidate to activate. If we can't find any
                // candidates whatsoever then it's time to bail entirely.
                trace!(
                    "{}[{}]>{} -- no candidates",
                    parent.name(),
                    resolver_ctx.age,
                    dep.package_name()
                );

                // Use our list of `conflicting_activations` to add to our
                // global list of past conflicting activations, effectively
                // globally poisoning `dep` if `conflicting_activations` ever
                // shows up again. We'll use the `past_conflicting_activations`
                // below to determine if a dependency is poisoned and skip as
                // much work as possible.
                //
                // If we're only here for the error messages then there's no
                // need to try this as this dependency is already known to be
                // bad.
                //
                // As we mentioned above with the `backtracked` variable, if this
                // local is set to `true` then our `conflicting_activations` may
                // not be right, so we can't push into our global cache.
                let mut generalize_conflicting_activations = None;
                if !just_here_for_the_error_messages && !backtracked {
                    past_conflicting_activations.insert(&dep, &conflicting_activations);
                    if let Some(c) = generalize_conflicting(
                        &resolver_ctx,
                        registry,
                        past_conflicting_activations,
                        &parent,
                        &dep,
                        &conflicting_activations,
                    ) {
                        generalize_conflicting_activations = Some(c);
                    }
                }

                match find_candidate(
                    &resolver_ctx,
                    &mut backtrack_stack,
                    &parent,
                    backtracked,
                    generalize_conflicting_activations
                        .as_ref()
                        .unwrap_or(&conflicting_activations),
                ) {
                    Some((candidate, has_another, frame)) => {
                        // Reset all of our local variables used with the
                        // contents of `frame` to complete our backtrack.
                        resolver_ctx = frame.context;
                        remaining_deps = frame.remaining_deps;
                        remaining_candidates = frame.remaining_candidates;
                        parent = frame.parent;
                        dep = frame.dep;
                        features = frame.features;
                        conflicting_activations = frame.conflicting_activations;
                        backtracked = true;
                        Ok((candidate, has_another))
                    }
                    None => {
                        debug!("no candidates found");
                        Err(errors::activation_error(
                            &resolver_ctx,
                            registry.registry,
                            &parent,
                            &dep,
                            &conflicting_activations,
                            &candidates,
                            gctx,
                        ))
                    }
                }
            })?;

            // If we're only here for the error messages then we know that this
            // activation will fail one way or another. To that end if we've got
            // more candidates we want to fast-forward to the last one as
            // otherwise we'll just backtrack here anyway (helping us to skip
            // some work).
            if just_here_for_the_error_messages && !backtracked && has_another {
                continue;
            }

            // We have a `candidate`. Create a `BacktrackFrame` so we can add it
            // to the `backtrack_stack` later if activation succeeds.
            //
            // Note that if we don't actually have another candidate then there
            // will be nothing to backtrack to so we skip construction of the
            // frame. This is a relatively important optimization as a number of
            // the `clone` calls below can be quite expensive, so we avoid them
            // if we can.
            let backtrack = if has_another {
                Some(BacktrackFrame {
                    context: ResolverContext::clone(&resolver_ctx),
                    remaining_deps: remaining_deps.clone(),
                    remaining_candidates: remaining_candidates.clone(),
                    parent: Summary::clone(&parent),
                    dep: Dependency::clone(&dep),
                    features: Rc::clone(&features),
                    conflicting_activations: conflicting_activations.clone(),
                })
            } else {
                None
            };

            let pid = candidate.package_id();
            let opts = ResolveOpts {
                dev_deps: false,
                features: RequestedFeatures::DepFeatures {
                    features: Rc::clone(&features),
                    uses_default_features: dep.uses_default_features(),
                },
            };
            trace!(
                "{}[{}]>{} trying {}",
                parent.name(),
                resolver_ctx.age,
                dep.package_name(),
                candidate.version()
            );
            let first_version = None; // this is an indirect dependency
            let res = activate(
                &mut resolver_ctx,
                registry,
                Some((&parent, &dep)),
                candidate,
                first_version,
                &opts,
            );

            let successfully_activated = match res {
                // Success! We've now activated our `candidate` in our context
                // and we're almost ready to move on. We may want to scrap this
                // frame in the end if it looks like it's not going to end well,
                // so figure that out here.
                Ok(Some((mut frame, dur))) => {
                    printed.elapsed(dur);

                    // Our `frame` here is a new package with its own list of
                    // dependencies. Do a sanity check here of all those
                    // dependencies by cross-referencing our global
                    // `past_conflicting_activations`. Recall that map is a
                    // global cache which lists sets of packages where, when
                    // activated, the dependency is unresolvable.
                    //
                    // If any of our frame's dependencies fit in that bucket,
                    // aka known unresolvable, then we extend our own set of
                    // conflicting activations with theirs. We can do this
                    // because the set of conflicts we found implies the
                    // dependency can't be activated which implies that we
                    // ourselves can't be activated, so we know that they
                    // conflict with us.
                    let mut has_past_conflicting_dep = just_here_for_the_error_messages;
                    if !has_past_conflicting_dep {
                        if let Some(conflicting) =
                            frame
                                .remaining_siblings
                                .remaining()
                                .find_map(|(ref new_dep, _, _)| {
                                    past_conflicting_activations.conflicting(&resolver_ctx, new_dep)
                                })
                        {
                            // If one of our deps is known unresolvable
                            // then we will not succeed.
                            // However, if we are part of the reason that
                            // one of our deps conflicts then
                            // we can make a stronger statement
                            // because we will definitely be activated when
                            // we try our dep.
                            conflicting_activations.extend(
                                conflicting
                                    .iter()
                                    .filter(|&(p, _)| p != &pid)
                                    .map(|(&p, r)| (p, r.clone())),
                            );

                            has_past_conflicting_dep = true;
                        }
                    }
                    // If any of `remaining_deps` are known unresolvable with
                    // us activated, then we extend our own set of
                    // conflicting activations with theirs and its parent. We can do this
                    // because the set of conflicts we found implies the
                    // dependency can't be activated which implies that we
                    // ourselves are incompatible with that dep, so we know that the dep's
                    // parent conflicts with us.
                    if !has_past_conflicting_dep {
                        if let Some(known_related_bad_deps) =
                            past_conflicting_activations.dependencies_conflicting_with(pid)
                        {
                            if let Some((other_parent, conflict)) = remaining_deps
                                .iter()
                                // for deps related to us
                                .filter(|(_, other_dep)| known_related_bad_deps.contains(other_dep))
                                .filter_map(|(other_parent, other_dep)| {
                                    past_conflicting_activations
                                        .find_conflicting(&resolver_ctx, &other_dep, Some(pid))
                                        .map(|con| (other_parent, con))
                                })
                                .next()
                            {
                                let rel = conflict.get(&pid).unwrap().clone();

                                // The conflict we found is
                                // "other dep will not succeed if we are activated."
                                // We want to add
                                // "our dep will not succeed if other dep is in remaining_deps",
                                // but that is not how the cache is set up.
                                // So we add the less general but much faster,
                                // "our dep will not succeed if other dep's parent is activated".
                                conflicting_activations.extend(
                                    conflict
                                        .iter()
                                        .filter(|&(p, _)| p != &pid)
                                        .map(|(&p, r)| (p, r.clone())),
                                );
                                conflicting_activations.insert(other_parent, rel);
                                has_past_conflicting_dep = true;
                            }
                        }
                    }

                    // Ok, if we're in a "known failure" state for this frame we
                    // may want to skip it altogether. We don't want to skip it,
                    // though, in the case that we're displaying error
                    // messages to the user!
                    //
                    // Here we need to figure out if the user will see if we
                    // skipped this candidate (if it's known to fail, aka has a
                    // conflicting dep and we're the last candidate). If we're
                    // here for the error messages, we can't skip it (but we can
                    // prune extra work). If we don't have any candidates in our
                    // backtrack stack then we're the last line of defense, so
                    // we'll want to present an error message for sure.
                    let activate_for_error_message = has_past_conflicting_dep && !has_another && {
                        just_here_for_the_error_messages || {
                            find_candidate(
                                &resolver_ctx,
                                &mut backtrack_stack.clone(),
                                &parent,
                                backtracked,
                                &conflicting_activations,
                            )
                            .is_none()
                        }
                    };

                    // If we're only here for the error messages then we know
                    // one of our candidate deps will fail, meaning we will
                    // fail and that none of the backtrack frames will find a
                    // candidate that will help. Consequently let's clean up the
                    // no longer needed backtrack frames.
                    if activate_for_error_message {
                        backtrack_stack.clear();
                    }

                    // If we don't know for a fact that we'll fail or if we're
                    // just here for the error message then we push this frame
                    // onto our list of to-be-resolved, which will generate more
                    // work for us later on.
                    //
                    // Otherwise we're guaranteed to fail and were not here for
                    // error messages, so we skip work and don't push anything
                    // onto our stack.
                    frame.just_for_error_messages = has_past_conflicting_dep;
                    if !has_past_conflicting_dep || activate_for_error_message {
                        remaining_deps.push(frame);
                        true
                    } else {
                        trace!(
                            "{}[{}]>{} skipping {} ",
                            parent.name(),
                            resolver_ctx.age,
                            dep.package_name(),
                            pid.version()
                        );
                        false
                    }
                }

                // This candidate's already activated, so there's no extra work
                // for us to do. Let's keep going.
                Ok(None) => true,

                // We failed with a super fatal error (like a network error), so
                // bail out as quickly as possible as we can't reliably
                // backtrack from errors like these
                Err(ActivateError::Fatal(e)) => return Err(e),

                // We failed due to a bland conflict, bah! Record this in our
                // frame's list of conflicting activations as to why this
                // candidate failed, and then move on.
                Err(ActivateError::Conflict(id, reason)) => {
                    conflicting_activations.insert(id, reason);
                    false
                }
            };

            // If we've successfully activated then save off the backtrack frame
            // if one was created, and otherwise break out of the inner
            // activation loop as we're ready to move to the next dependency
            if successfully_activated {
                backtrack_stack.extend(backtrack);
                break;
            }

            // We've failed to activate this dependency, oh dear! Our call to
            // `activate` above may have altered our `resolver_ctx` local
            // variable, so restore it back if we've got a backtrack frame.
            //
            // If we don't have a backtrack frame then we're just using the
            // `resolver_ctx` for error messages anyway so we can live with a
            // little imprecision.
            if let Some(b) = backtrack {
                resolver_ctx = b.context;
            }
        }

        // Ok phew, that loop was a big one! If we've broken out then we've
        // successfully activated a candidate. Our stacks are all in place that
        // we're ready to move on to the next dependency that needs activation,
        // so loop back to the top of the function here.
    }

    Ok(resolver_ctx)
}

/// Attempts to activate the summary `candidate` in the context `cx`.
///
/// This function will pull dependency summaries from the registry provided, and
/// the dependencies of the package will be determined by the `opts` provided.
/// If `candidate` was activated, this function returns the dependency frame to
/// iterate through next.
fn activate(
    cx: &mut ResolverContext,
    registry: &mut RegistryQueryer<'_>,
    parent: Option<(&Summary, &Dependency)>,
    candidate: Summary,
    first_version: Option<VersionOrdering>,
    opts: &ResolveOpts,
) -> ActivateResult<Option<(DepsFrame, Duration)>> {
    let candidate_pid = candidate.package_id();
    cx.age += 1;
    if let Some((parent, dep)) = parent {
        let parent_pid = parent.package_id();
        // add an edge from candidate to parent in the parents graph
        cx.parents
            .link(candidate_pid, parent_pid)
            // and associate dep with that edge
            .insert(dep.clone());
    }

    let activated = cx.flag_activated(&candidate, opts, parent)?;

    let candidate = match registry.replacement_summary(candidate_pid) {
        Some(replace) => {
            // Note the `None` for parent here since `[replace]` is a bit wonky
            // and doesn't activate the same things that `[patch]` typically
            // does. TBH it basically causes panics in the test suite if
            // `parent` is passed through here, and `[replace]` is otherwise
            // on life support so it's not critical to fix bugs anyway per se.
            if cx.flag_activated(replace, opts, None)? && activated {
                return Ok(None);
            }
            trace!(
                "activating {} (replacing {})",
                replace.package_id(),
                candidate_pid
            );
            replace.clone()
        }
        None => {
            if activated {
                return Ok(None);
            }
            trace!("activating {}", candidate_pid);
            candidate
        }
    };

    let now = Instant::now();
    let (used_features, deps) = &*registry.build_deps(
        cx,
        parent.map(|p| p.0.package_id()),
        &candidate,
        opts,
        first_version,
    )?;

    // Record what list of features is active for this package.
    if !used_features.is_empty() {
        Rc::make_mut(
            cx.resolve_features
                .entry(candidate.package_id())
                .or_default(),
        )
        .extend(used_features);
    }

    let frame = DepsFrame {
        parent: candidate,
        just_for_error_messages: false,
        remaining_siblings: RcVecIter::new(Rc::clone(deps)),
    };
    Ok(Some((frame, now.elapsed())))
}

#[derive(Clone)]
struct BacktrackFrame {
    context: ResolverContext,
    remaining_deps: RemainingDeps,
    remaining_candidates: RemainingCandidates,
    parent: Summary,
    dep: Dependency,
    features: FeaturesSet,
    conflicting_activations: ConflictMap,
}

/// A helper "iterator" used to extract candidates within a current `Context` of
/// a dependency graph.
///
/// This struct doesn't literally implement the `Iterator` trait (requires a few
/// more inputs) but in general acts like one. Each `RemainingCandidates` is
/// created with a list of candidates to choose from. When attempting to iterate
/// over the list of candidates only *valid* candidates are returned. Validity
/// is defined within a `Context`.
///
/// Candidates passed to `new` may not be returned from `next` as they could be
/// filtered out, and as they are filtered the causes will be added to `conflicting_prev_active`.
#[derive(Clone)]
struct RemainingCandidates {
    remaining: RcVecIter<Summary>,
    // This is an inlined peekable generator
    has_another: Option<Summary>,
}

impl RemainingCandidates {
    fn new(candidates: &Rc<Vec<Summary>>) -> RemainingCandidates {
        RemainingCandidates {
            remaining: RcVecIter::new(Rc::clone(candidates)),
            has_another: None,
        }
    }

    /// Attempts to find another candidate to check from this list.
    ///
    /// This method will attempt to move this iterator forward, returning a
    /// candidate that's possible to activate. The `cx` argument is the current
    /// context which determines validity for candidates returned, and the `dep`
    /// is the dependency listing that we're activating for.
    ///
    /// If successful a `(Candidate, bool)` pair will be returned. The
    /// `Candidate` is the candidate to attempt to activate, and the `bool` is
    /// an indicator of whether there are remaining candidates to try or if
    /// we've reached the end of iteration.
    ///
    /// If we've reached the end of the iterator here then `None` will be
    /// returned. In that case `conflicting_prev_active` contains a map of
    /// package ID to conflict reason, where each package ID caused a candidate
    /// to be filtered out from the original list for the reason listed.
    fn next(
        &mut self,
        conflicting_prev_active: &mut ConflictMap,
        cx: &ResolverContext,
    ) -> Option<(Summary, bool)> {
        for b in self.remaining.iter() {
            let b_id = b.package_id();

            // The condition for being a valid candidate relies on
            // semver. Cargo dictates that you can't duplicate multiple
            // semver-compatible versions of a crate. For example we can't
            // simultaneously activate `foo 1.0.2` and `foo 1.2.0`. We can,
            // however, activate `1.0.2` and `2.0.0`.
            //
            // Here we throw out our candidate if it's *compatible* with, yet
            // not equal to, a previously activated version.
            if let Some((a, _)) = cx.activations.get(&b_id.as_activations_key()) {
                if a != b {
                    conflicting_prev_active
                        .entry(a.package_id())
                        .or_insert(ConflictReason::Semver);
                    continue;
                }
            }

            // Otherwise the `links` key in the manifest dictates that there's only one
            // package in a dependency graph, globally, with that particular
            // `links` key. If this candidate links to something that's already
            // linked to by a different package then we've gotta skip this.
            if let Some(link) = b.links() {
                if let Some(&a) = cx.links.get(&link) {
                    if a != b_id {
                        conflicting_prev_active
                            .entry(a)
                            .or_insert_with(|| ConflictReason::Links(link));
                        continue;
                    }
                }
            }

            // Well if we made it this far then we've got a valid dependency. We
            // want this iterator to be inherently "peekable" so we don't
            // necessarily return the item just yet. Instead we stash it away to
            // get returned later, and if we replaced something then that was
            // actually the candidate to try first so we return that.
            if let Some(r) = self.has_another.replace(b.clone()) {
                return Some((r, true));
            }
        }

        // Alright we've entirely exhausted our list of candidates. If we've got
        // something stashed away return that here (also indicating that there's
        // nothing else).
        self.has_another.take().map(|r| (r, false))
    }
}

/// Attempts to find a new conflict that allows a `find_candidate` better than the input one.
/// It will add the new conflict to the cache if one is found.
fn generalize_conflicting(
    cx: &ResolverContext,
    registry: &mut RegistryQueryer<'_>,
    past_conflicting_activations: &mut conflict_cache::ConflictCache,
    parent: &Summary,
    dep: &Dependency,
    conflicting_activations: &ConflictMap,
) -> Option<ConflictMap> {
    // We need to determine the `ContextAge` that this `conflicting_activations` will jump to, and why.
    let (backtrack_critical_age, backtrack_critical_id) = shortcircuit_max(
        conflicting_activations
            .keys()
            .map(|&c| cx.is_active(c).map(|a| (a, c))),
    )?;
    let backtrack_critical_reason: ConflictReason =
        conflicting_activations[&backtrack_critical_id].clone();

    if cx
        .parents
        .is_path_from_to(&parent.package_id(), &backtrack_critical_id)
    {
        // We are a descendant of the trigger of the problem.
        // The best generalization of this is to let things bubble up
        // and let `backtrack_critical_id` figure this out.
        return None;
    }
    // What parents does that critical activation have
    for (critical_parent, critical_parents_deps) in
        cx.parents.edges(&backtrack_critical_id).filter(|(p, _)| {
            // it will only help backjump further if it is older than the critical_age
            cx.is_active(**p).expect("parent not currently active!?") < backtrack_critical_age
        })
    {
        for critical_parents_dep in critical_parents_deps.iter() {
            // We only want `first_version.is_some()` for direct dependencies of workspace
            // members, which isn't the case here as this has a `parent`.
            let first_version = None;
            // A dep is equivalent to one of the things it can resolve to.
            // Thus, if all the things it can resolve to have already been determined
            // to be conflicting, then we can just say that we conflict with the parent.
            if let Some(others) = registry
                .query(critical_parents_dep, first_version)
                .expect("an already used dep now error!?")
                .expect("an already used dep now pending!?")
                .iter()
                .rev() // the last one to be tried is the least likely to be in the cache, so start with that.
                .map(|other| {
                    past_conflicting_activations
                        .find(
                            dep,
                            &|id| {
                                if id == other.package_id() {
                                    // we are imagining that we used other instead
                                    Some(backtrack_critical_age)
                                } else {
                                    cx.is_active(id)
                                }
                            },
                            Some(other.package_id()),
                            // we only care about things that are newer than critical_age
                            backtrack_critical_age,
                        )
                        .map(|con| (other.package_id(), con))
                })
                .collect::<Option<Vec<(PackageId, &ConflictMap)>>>()
            {
                let mut con = conflicting_activations.clone();
                // It is always valid to combine previously inserted conflicts.
                // A, B are both known bad states each that can never be activated.
                // A + B is redundant but can't be activated, as if
                // A + B is active then A is active and we know that is not ok.
                for (_, other) in &others {
                    con.extend(other.iter().map(|(&id, re)| (id, re.clone())));
                }
                // Now that we have this combined conflict, we can do a substitution:
                // A dep is equivalent to one of the things it can resolve to.
                // So we can remove all the things that it resolves to and replace with the parent.
                for (other_id, _) in &others {
                    con.remove(other_id);
                }
                con.insert(*critical_parent, backtrack_critical_reason);

                if cfg!(debug_assertions) {
                    // the entire point is to find an older conflict, so let's make sure we did
                    let new_age = con
                        .keys()
                        .map(|&c| cx.is_active(c).expect("not currently active!?"))
                        .max()
                        .unwrap();
                    assert!(
                        new_age < backtrack_critical_age,
                        "new_age {} < backtrack_critical_age {}",
                        new_age,
                        backtrack_critical_age
                    );
                }
                past_conflicting_activations.insert(dep, &con);
                return Some(con);
            }
        }
    }
    None
}

/// Returns `Some` with the largest item in the iterator.
/// Returns `None` if any of the items are `None` or the iterator is empty.
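///
/// A minimal illustration of the intended behavior (the values are made up
/// for the example):
///
/// ```ignore
/// assert_eq!(shortcircuit_max([Some(1), Some(3)].into_iter()), Some(3));
/// assert_eq!(shortcircuit_max([Some(1), None].into_iter()), None);
/// assert_eq!(shortcircuit_max(std::iter::empty::<Option<u32>>()), None);
/// ```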
fn shortcircuit_max<I: Ord>(iter: impl Iterator<Item = Option<I>>) -> Option<I> {
    let mut out = None;
    for i in iter {
        if i.is_none() {
            return None;
        }
        out = std::cmp::max(out, i);
    }
    out
}

/// Looks through the states in `backtrack_stack` for dependencies with
/// remaining candidates. For each one, also checks if rolling back
/// could change the outcome of the failed resolution that caused backtracking
/// in the first place. Namely, if we've backtracked past the parent of the
/// failed dep, or any of the packages flagged as giving us trouble in
/// `conflicting_activations`.
///
/// Read <https://github.com/rust-lang/cargo/pull/4834>
/// for several more detailed explanations of the logic here.
fn find_candidate(
    cx: &ResolverContext,
    backtrack_stack: &mut Vec<BacktrackFrame>,
    parent: &Summary,
    backtracked: bool,
    conflicting_activations: &ConflictMap,
) -> Option<(Summary, bool, BacktrackFrame)> {
    // When we're calling this method we know that `parent` failed to
    // activate. That means that some dependency failed to get resolved for
    // whatever reason. Normally, that means that all of those reasons
    // (plus maybe some extras) are listed in `conflicting_activations`.
    //
    // The abnormal situations are things that do not put all of the reasons in `conflicting_activations`:
    // If we backtracked we do not know how our `conflicting_activations` related to
    // the cause of that backtrack, so we do not update it.
    let age = if !backtracked {
        // we don't have abnormal situations. So we can ask `cx` for how far back we need to go.
        // If the `conflicting_activations` does not apply to `cx`,
        // we will just fall back to laboriously trying all possibilities, which
        // will give us the correct answer.
        cx.is_conflicting(Some(parent.package_id()), conflicting_activations)
    } else {
        None
    };
    let mut new_frame = None;
    if let Some(age) = age {
        while let Some(frame) = backtrack_stack.pop() {
            // If all members of `conflicting_activations` are still
            // active in this backtrack frame we know that we're guaranteed to not actually
            // make any progress. As a result if we hit this condition we can
            // completely skip this backtrack frame and move on to the next.

            // Above we use `cx` to determine if this is going to be conflicting.
            // But let's just double check if the `pop`ed frame agrees.
            let frame_too_new = frame.context.age >= age;
            debug_assert!(
                frame
                    .context
                    .is_conflicting(Some(parent.package_id()), conflicting_activations)
                    == frame_too_new.then_some(age)
            );

            if !frame_too_new {
                new_frame = Some(frame);
                break;
            }
            trace!(
                "{} = \"{}\" skip as not solving {}: {:?}",
                frame.dep.package_name(),
                frame.dep.version_req(),
                parent.package_id(),
                conflicting_activations
            );
        }
    } else {
        // If we're here then we are in abnormal situations and need to just go one frame at a time.
        new_frame = backtrack_stack.pop();
    }

    new_frame.map(|mut frame| {
        let (candidate, has_another) = frame
            .remaining_candidates
            .next(&mut frame.conflicting_activations, &frame.context)
            .expect("why did we save a frame that has no next?");
        (candidate, has_another, frame)
    })
}

fn check_cycles(resolve: &Resolve) -> CargoResult<()> {
    // Perform a simple cycle check by visiting all nodes.
    // We visit each node at most once and we keep
    // track of the path through the graph as we walk it. If we walk onto the
    // same node twice that's a cycle.
    let mut checked = HashSet::with_capacity(resolve.len());
    let mut path = Vec::with_capacity(4);
    let mut visited = HashSet::with_capacity(4);
    for pkg in resolve.iter() {
        if !checked.contains(&pkg) {
            visit(&resolve, pkg, &mut visited, &mut path, &mut checked)?
        }
    }
    return Ok(());

    fn visit(
        resolve: &Resolve,
        id: PackageId,
        visited: &mut HashSet<PackageId>,
        path: &mut Vec<PackageId>,
        checked: &mut HashSet<PackageId>,
    ) -> CargoResult<()> {
        if !visited.insert(id) {
            // We found a cycle and need to construct an error. Performance is no longer top priority.
            let iter = path.iter().rev().scan(id, |child, parent| {
                let dep = resolve.transitive_deps_not_replaced(*parent).find_map(
                    |(dep_id, transitive_dep)| {
                        (*child == dep_id || Some(*child) == resolve.replacement(dep_id))
                            .then_some(transitive_dep)
                    },
                );
                *child = *parent;
                Some((parent, dep))
            });
            let iter = std::iter::once((&id, None)).chain(iter);
            let describe_path = errors::describe_path(iter);
            anyhow::bail!(
                "cyclic package dependency: package `{id}` depends on itself. Cycle:\n{describe_path}"
            );
        }

        if checked.insert(id) {
            path.push(id);
            for (dep_id, _transitive_dep) in resolve.transitive_deps_not_replaced(id) {
                visit(resolve, dep_id, visited, path, checked)?;
                if let Some(replace_id) = resolve.replacement(dep_id) {
                    visit(resolve, replace_id, visited, path, checked)?;
                }
            }
            path.pop();
        }

        visited.remove(&id);
        Ok(())
    }
}

/// Checks that packages are unique when written to the lock file.
///
/// When writing package IDs to the lock file, we apply lossy encoding. In
/// particular, we don't store paths of path dependencies. That means that
/// *different* packages may collide in the lock file, hence this check. For
/// example, two distinct path dependencies that share a name and version
/// would serialize to the same lock file entry.
fn check_duplicate_pkgs_in_lockfile(resolve: &Resolve) -> CargoResult<()> {
    let mut unique_pkg_ids = HashMap::new();
    let state = encode::EncodeState::new(resolve);
    for pkg_id in resolve.iter() {
        let encodable_pkg_id = encode::encodable_package_id(pkg_id, &state, resolve.version());
        if let Some(prev_pkg_id) = unique_pkg_ids.insert(encodable_pkg_id, pkg_id) {
            anyhow::bail!(
                "package collision in the lockfile: packages {} and {} are different, \
                 but only one can be written to lockfile unambiguously",
                prev_pkg_id,
                pkg_id
            )
        }
    }
    Ok(())
}