rustc_monomorphize/
collector.rs

1//! Mono Item Collection
2//! ====================
3//!
4//! This module is responsible for discovering all items that will contribute
5//! to code generation of the crate. The important part here is that it not only
6//! needs to find syntax-level items (functions, structs, etc) but also all
7//! their monomorphized instantiations. Every non-generic, non-const function
8//! maps to one LLVM artifact. Every generic function can produce
9//! from zero to N artifacts, depending on the sets of type arguments it
10//! is instantiated with.
11//! This also applies to generic items from other crates: A generic definition
12//! in crate X might produce monomorphizations that are compiled into crate Y.
13//! We also have to collect these here.
14//!
15//! The following kinds of "mono items" are handled here:
16//!
17//! - Functions
18//! - Methods
19//! - Closures
20//! - Statics
21//! - Drop glue
22//!
23//! The following things also result in LLVM artifacts, but are not collected
24//! here, since we instantiate them locally on demand when needed in a given
25//! codegen unit:
26//!
27//! - Constants
28//! - VTables
29//! - Object Shims
30//!
31//! The main entry point is `collect_crate_mono_items`, at the bottom of this file.
32//!
33//! General Algorithm
34//! -----------------
35//! Let's define some terms first:
36//!
37//! - A "mono item" is something that results in a function or global in
38//!   the LLVM IR of a codegen unit. Mono items do not stand on their
39//!   own, they can use other mono items. For example, if function
40//!   `foo()` calls function `bar()` then the mono item for `foo()`
41//!   uses the mono item for function `bar()`. In general, the
42//!   definition for mono item A using a mono item B is that
43//!   the LLVM artifact produced for A uses the LLVM artifact produced
44//!   for B.
45//!
46//! - Mono items and the uses between them form a directed graph,
47//!   where the mono items are the nodes and uses form the edges.
48//!   Let's call this graph the "mono item graph".
49//!
50//! - The mono item graph for a program contains all mono items
51//!   that are needed in order to produce the complete LLVM IR of the program.
52//!
53//! The purpose of the algorithm implemented in this module is to build the
54//! mono item graph for the current crate. It runs in two phases:
55//!
56//! 1. Discover the roots of the graph by traversing the HIR of the crate.
57//! 2. Starting from the roots, find uses by inspecting the MIR
58//!    representation of the item corresponding to a given node, until no more
59//!    new nodes are found.
60//!
61//! ### Discovering roots
62//! The roots of the mono item graph correspond to the public non-generic
63//! syntactic items in the source code. We find them by walking the HIR of the
64//! crate, and whenever we hit upon a public function, method, or static item,
65//! we create a mono item consisting of the items DefId and, since we only
66//! consider non-generic items, an empty type-parameters set. (In eager
67//! collection mode, during incremental compilation, all non-generic functions
68//! are considered as roots, as well as when the `-Clink-dead-code` option is
69//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
70//! functions also always act as roots.)
71//!
72//! ### Finding uses
73//! Given a mono item node, we can discover uses by inspecting its MIR. We walk
74//! the MIR to find other mono items used by each mono item. Since the mono
75//! item we are currently at is always monomorphic, we also know the concrete
76//! type arguments of its used mono items. The specific forms a use can take in
77//! MIR are quite diverse. Here is an overview:
78//!
79//! #### Calling Functions/Methods
80//! The most obvious way for one mono item to use another is a
81//! function or method call (represented by a CALL terminator in MIR). But
82//! calls are not the only thing that might introduce a use between two
83//! function mono items, and as we will see below, they are just a
84//! specialization of the form described next, and consequently will not get any
85//! special treatment in the algorithm.
86//!
87//! #### Taking a reference to a function or method
88//! A function does not need to actually be called in order to be used by
89//! another function. It suffices to just take a reference in order to introduce
90//! an edge. Consider the following example:
91//!
92//! ```
93//! # use core::fmt::Display;
94//! fn print_val<T: Display>(x: T) {
95//!     println!("{}", x);
96//! }
97//!
98//! fn call_fn(f: &dyn Fn(i32), x: i32) {
99//!     f(x);
100//! }
101//!
102//! fn main() {
103//!     let print_i32 = print_val::<i32>;
104//!     call_fn(&print_i32, 0);
105//! }
106//! ```
107//! The MIR of none of these functions will contain an explicit call to
108//! `print_val::<i32>`. Nonetheless, in order to mono this program, we need
109//! an instance of this function. Thus, whenever we encounter a function or
110//! method in operand position, we treat it as a use of the current
111//! mono item. Calls are just a special case of that.
112//!
113//! #### Drop glue
114//! Drop glue mono items are introduced by MIR drop-statements. The
115//! generated mono item will have additional drop-glue item uses if the
116//! type to be dropped contains nested values that also need to be dropped. It
117//! might also have a function item use for the explicit `Drop::drop`
118//! implementation of its type.
119//!
120//! #### Unsizing Casts
121//! A subtle way of introducing use edges is by casting to a trait object.
122//! Since the resulting wide-pointer contains a reference to a vtable, we need to
123//! instantiate all dyn-compatible methods of the trait, as we need to store
124//! pointers to these functions even if they never get called anywhere. This can
125//! be seen as a special case of taking a function reference.
126//!
127//!
128//! Interaction with Cross-Crate Inlining
129//! -------------------------------------
130//! The binary of a crate will not only contain machine code for the items
131//! defined in the source code of that crate. It will also contain monomorphic
132//! instantiations of any extern generic functions and of functions marked with
133//! `#[inline]`.
134//! The collection algorithm handles this more or less mono. If it is
135//! about to create a mono item for something with an external `DefId`,
136//! it will take a look if the MIR for that item is available, and if so just
137//! proceed normally. If the MIR is not available, it assumes that the item is
138//! just linked to and no node is created; which is exactly what we want, since
139//! no machine code should be generated in the current crate for such an item.
140//!
141//! Eager and Lazy Collection Strategy
142//! ----------------------------------
143//! Mono item collection can be performed with one of two strategies:
144//!
145//! - Lazy strategy means that items will only be instantiated when actually
146//!   used. The goal is to produce the least amount of machine code
147//!   possible.
148//!
149//! - Eager strategy is meant to be used in conjunction with incremental compilation
150//!   where a stable set of mono items is more important than a minimal
151//!   one. Thus, eager strategy will instantiate drop-glue for every drop-able type
152//!   in the crate, even if no drop call for that type exists (yet). It will
153//!   also instantiate default implementations of trait methods, something that
154//!   otherwise is only done on demand.
155//!
156//! Collection-time const evaluation and "mentioned" items
157//! ------------------------------------------------------
158//!
159//! One important role of collection is to evaluate all constants that are used by all the items
160//! which are being collected. Codegen can then rely on only encountering constants that evaluate
161//! successfully, and if a constant fails to evaluate, the collector has much better context to be
162//! able to show where this constant comes up.
163//!
164//! However, the exact set of "used" items (collected as described above), and therefore the exact
165//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
166//! a function call that uses a failing constant, so an unoptimized build may fail where an
167//! optimized build succeeds. This is undesirable.
168//!
169//! To avoid this, the collector has the concept of "mentioned" items. Some time during the MIR
170//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
171//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
172//! dead code and get optimized away (which makes them no longer "used"), they are still
173//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
174//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
175//! whether we are visiting a used item or merely a mentioned item.
176//!
177//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
178//! need to stay in sync in the following sense:
179//!
180//! - For every item that the collector gather that could eventually lead to build failure (most
181//!   likely due to containing a constant that fails to evaluate), a corresponding mentioned item
182//!   must be added. This should use the exact same strategy as the ecollector to make sure they are
183//!   in sync. However, while the collector works on monomorphized types, mentioned items are
184//!   collected on generic MIR -- so any time the collector checks for a particular type (such as
185//!   `ty::FnDef`), we have to just onconditionally add this as a mentioned item.
186//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
187//!   would have done during regular MIR visiting. Basically you can think of the collector having
188//!   two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
189//!   literally separated by a call to `self.monomorphize`); the pre-monomorphizationn stage is
190//!   duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
191//!   `visit_mentioned_item`.
192//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
193//!   its MIR traversal with exactly what mentioned item gathering would have added in the same
194//!   situation. This detects mentioned items that have *not* been optimized away and hence don't
195//!   need a dedicated traversal.
196//!
197//! Open Issues
198//! -----------
199//! Some things are not yet fully implemented in the current version of this
200//! module.
201//!
202//! ### Const Fns
203//! Ideally, no mono item should be generated for const fns unless there
204//! is a call to them that cannot be evaluated at compile time. At the moment
205//! this is not implemented however: a mono item will be produced
206//! regardless of whether it is actually needed or not.
207
208mod autodiff;
209
210use std::cell::OnceCell;
211use std::ops::ControlFlow;
212
213use rustc_data_structures::fx::FxIndexMap;
214use rustc_data_structures::sync::{MTLock, par_for_each_in};
215use rustc_data_structures::unord::{UnordMap, UnordSet};
216use rustc_hir as hir;
217use rustc_hir::attrs::InlineAttr;
218use rustc_hir::def::DefKind;
219use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
220use rustc_hir::lang_items::LangItem;
221use rustc_hir::limit::Limit;
222use rustc_middle::middle::codegen_fn_attrs::CodegenFnAttrFlags;
223use rustc_middle::mir::interpret::{AllocId, ErrorHandled, GlobalAlloc, Scalar};
224use rustc_middle::mir::mono::{
225    CollectionMode, InstantiationMode, MonoItem, NormalizationErrorInMono,
226};
227use rustc_middle::mir::visit::Visitor as MirVisitor;
228use rustc_middle::mir::{self, Body, Location, MentionedItem, traversal};
229use rustc_middle::query::TyCtxtAt;
230use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCoercion};
231use rustc_middle::ty::layout::ValidityRequirement;
232use rustc_middle::ty::{
233    self, GenericArgs, GenericParamDefKind, Instance, InstanceKind, Ty, TyCtxt, TypeFoldable,
234    TypeVisitable, TypeVisitableExt, TypeVisitor, VtblEntry,
235};
236use rustc_middle::util::Providers;
237use rustc_middle::{bug, span_bug};
238use rustc_session::config::{DebugInfo, EntryFnType};
239use rustc_span::source_map::{Spanned, dummy_spanned, respan};
240use rustc_span::{DUMMY_SP, Span};
241use tracing::{debug, instrument, trace};
242
243use crate::collector::autodiff::collect_autodiff_fn;
244use crate::errors::{
245    self, EncounteredErrorWhileInstantiating, EncounteredErrorWhileInstantiatingGlobalAsm,
246    NoOptimizedMir, RecursionLimit,
247};
248
249#[derive(PartialEq)]
250pub(crate) enum MonoItemCollectionStrategy {
251    Eager,
252    Lazy,
253}
254
255/// The state that is shared across the concurrent threads that are doing collection.
256struct SharedState<'tcx> {
257    /// Items that have been or are currently being recursively collected.
258    visited: MTLock<UnordSet<MonoItem<'tcx>>>,
259    /// Items that have been or are currently being recursively treated as "mentioned", i.e., their
260    /// consts are evaluated but nothing is added to the collection.
261    mentioned: MTLock<UnordSet<MonoItem<'tcx>>>,
262    /// Which items are being used where, for better errors.
263    usage_map: MTLock<UsageMap<'tcx>>,
264}
265
266pub(crate) struct UsageMap<'tcx> {
267    // Maps every mono item to the mono items used by it.
268    pub used_map: UnordMap<MonoItem<'tcx>, Vec<MonoItem<'tcx>>>,
269
270    // Maps each mono item with users to the mono items that use it.
271    // Be careful: subsets `used_map`, so unused items are vacant.
272    user_map: UnordMap<MonoItem<'tcx>, Vec<MonoItem<'tcx>>>,
273}
274
275impl<'tcx> UsageMap<'tcx> {
276    fn new() -> UsageMap<'tcx> {
277        UsageMap { used_map: Default::default(), user_map: Default::default() }
278    }
279
280    fn record_used<'a>(&mut self, user_item: MonoItem<'tcx>, used_items: &'a MonoItems<'tcx>)
281    where
282        'tcx: 'a,
283    {
284        for used_item in used_items.items() {
285            self.user_map.entry(used_item).or_default().push(user_item);
286        }
287
288        assert!(self.used_map.insert(user_item, used_items.items().collect()).is_none());
289    }
290
291    pub(crate) fn get_user_items(&self, item: MonoItem<'tcx>) -> &[MonoItem<'tcx>] {
292        self.user_map.get(&item).map(|items| items.as_slice()).unwrap_or(&[])
293    }
294
295    /// Internally iterate over all inlined items used by `item`.
296    pub(crate) fn for_each_inlined_used_item<F>(
297        &self,
298        tcx: TyCtxt<'tcx>,
299        item: MonoItem<'tcx>,
300        mut f: F,
301    ) where
302        F: FnMut(MonoItem<'tcx>),
303    {
304        let used_items = self.used_map.get(&item).unwrap();
305        for used_item in used_items.iter() {
306            let is_inlined = used_item.instantiation_mode(tcx) == InstantiationMode::LocalCopy;
307            if is_inlined {
308                f(*used_item);
309            }
310        }
311    }
312}
313
314struct MonoItems<'tcx> {
315    // We want a set of MonoItem + Span where trying to re-insert a MonoItem with a different Span
316    // is ignored. Map does that, but it looks odd.
317    items: FxIndexMap<MonoItem<'tcx>, Span>,
318}
319
320impl<'tcx> MonoItems<'tcx> {
321    fn new() -> Self {
322        Self { items: FxIndexMap::default() }
323    }
324
325    fn is_empty(&self) -> bool {
326        self.items.is_empty()
327    }
328
329    fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
330        // Insert only if the entry does not exist. A normal insert would stomp the first span that
331        // got inserted.
332        self.items.entry(item.node).or_insert(item.span);
333    }
334
335    fn items(&self) -> impl Iterator<Item = MonoItem<'tcx>> {
336        self.items.keys().cloned()
337    }
338}
339
340impl<'tcx> IntoIterator for MonoItems<'tcx> {
341    type Item = Spanned<MonoItem<'tcx>>;
342    type IntoIter = impl Iterator<Item = Spanned<MonoItem<'tcx>>>;
343
344    fn into_iter(self) -> Self::IntoIter {
345        self.items.into_iter().map(|(item, span)| respan(span, item))
346    }
347}
348
349impl<'tcx> Extend<Spanned<MonoItem<'tcx>>> for MonoItems<'tcx> {
350    fn extend<I>(&mut self, iter: I)
351    where
352        I: IntoIterator<Item = Spanned<MonoItem<'tcx>>>,
353    {
354        for item in iter {
355            self.push(item)
356        }
357    }
358}
359
360fn collect_items_root<'tcx>(
361    tcx: TyCtxt<'tcx>,
362    starting_item: Spanned<MonoItem<'tcx>>,
363    state: &SharedState<'tcx>,
364    recursion_limit: Limit,
365) {
366    if !state.visited.lock_mut().insert(starting_item.node) {
367        // We've been here already, no need to search again.
368        return;
369    }
370    let mut recursion_depths = DefIdMap::default();
371    collect_items_rec(
372        tcx,
373        starting_item,
374        state,
375        &mut recursion_depths,
376        recursion_limit,
377        CollectionMode::UsedItems,
378    );
379}
380
381/// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
382/// post-monomorphization error is encountered during a collection step.
383///
384/// `mode` determined whether we are scanning for [used items][CollectionMode::UsedItems]
385/// or [mentioned items][CollectionMode::MentionedItems].
386#[instrument(skip(tcx, state, recursion_depths, recursion_limit), level = "debug")]
387fn collect_items_rec<'tcx>(
388    tcx: TyCtxt<'tcx>,
389    starting_item: Spanned<MonoItem<'tcx>>,
390    state: &SharedState<'tcx>,
391    recursion_depths: &mut DefIdMap<usize>,
392    recursion_limit: Limit,
393    mode: CollectionMode,
394) {
395    let mut used_items = MonoItems::new();
396    let mut mentioned_items = MonoItems::new();
397    let recursion_depth_reset;
398
399    // Post-monomorphization errors MVP
400    //
401    // We can encounter errors while monomorphizing an item, but we don't have a good way of
402    // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
403    // (It's also currently unclear exactly which diagnostics and information would be interesting
404    // to report in such cases)
405    //
406    // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
407    // shown with just a spanned piece of code causing the error, without information on where
408    // it was called from. This is especially obscure if the erroneous mono item is in a
409    // dependency. See for example issue #85155, where, before minimization, a PME happened two
410    // crates downstream from libcore's stdarch, without a way to know which dependency was the
411    // cause.
412    //
413    // If such an error occurs in the current crate, its span will be enough to locate the
414    // source. If the cause is in another crate, the goal here is to quickly locate which mono
415    // item in the current crate is ultimately responsible for causing the error.
416    //
417    // To give at least _some_ context to the user: while collecting mono items, we check the
418    // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
419    // current step of mono items collection.
420    //
421    // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
422    let error_count = tcx.dcx().err_count();
423
424    // In `mentioned_items` we collect items that were mentioned in this MIR but possibly do not
425    // need to be monomorphized. This is done to ensure that optimizing away function calls does not
426    // hide const-eval errors that those calls would otherwise have triggered.
427    match starting_item.node {
428        MonoItem::Static(def_id) => {
429            recursion_depth_reset = None;
430
431            // Statics always get evaluated (which is possible because they can't be generic), so for
432            // `MentionedItems` collection there's nothing to do here.
433            if mode == CollectionMode::UsedItems {
434                let instance = Instance::mono(tcx, def_id);
435
436                // Sanity check whether this ended up being collected accidentally
437                debug_assert!(tcx.should_codegen_locally(instance));
438
439                let DefKind::Static { nested, .. } = tcx.def_kind(def_id) else { bug!() };
440                // Nested statics have no type.
441                if !nested {
442                    let ty = instance.ty(tcx, ty::TypingEnv::fully_monomorphized());
443                    visit_drop_use(tcx, ty, true, starting_item.span, &mut used_items);
444                }
445
446                if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
447                    for &prov in alloc.inner().provenance().ptrs().values() {
448                        collect_alloc(tcx, prov.alloc_id(), &mut used_items);
449                    }
450                }
451
452                if tcx.needs_thread_local_shim(def_id) {
453                    used_items.push(respan(
454                        starting_item.span,
455                        MonoItem::Fn(Instance {
456                            def: InstanceKind::ThreadLocalShim(def_id),
457                            args: GenericArgs::empty(),
458                        }),
459                    ));
460                }
461            }
462
463            // mentioned_items stays empty since there's no codegen for statics. statics don't get
464            // optimized, and if they did then the const-eval interpreter would have to worry about
465            // mentioned_items.
466        }
467        MonoItem::Fn(instance) => {
468            // Sanity check whether this ended up being collected accidentally
469            debug_assert!(tcx.should_codegen_locally(instance));
470
471            // Keep track of the monomorphization recursion depth
472            recursion_depth_reset = Some(check_recursion_limit(
473                tcx,
474                instance,
475                starting_item.span,
476                recursion_depths,
477                recursion_limit,
478            ));
479
480            rustc_data_structures::stack::ensure_sufficient_stack(|| {
481                let Ok((used, mentioned)) = tcx.items_of_instance((instance, mode)) else {
482                    // Normalization errors here are usually due to trait solving overflow.
483                    // FIXME: I assume that there are few type errors at post-analysis stage, but not
484                    // entirely sure.
485                    // We have to emit the error outside of `items_of_instance` to access the
486                    // span of the `starting_item`.
487                    let def_id = instance.def_id();
488                    let def_span = tcx.def_span(def_id);
489                    let def_path_str = tcx.def_path_str(def_id);
490                    tcx.dcx().emit_fatal(RecursionLimit {
491                        span: starting_item.span,
492                        instance,
493                        def_span,
494                        def_path_str,
495                    });
496                };
497                used_items.extend(used.into_iter().copied());
498                mentioned_items.extend(mentioned.into_iter().copied());
499            });
500        }
501        MonoItem::GlobalAsm(item_id) => {
502            assert!(
503                mode == CollectionMode::UsedItems,
504                "should never encounter global_asm when collecting mentioned items"
505            );
506            recursion_depth_reset = None;
507
508            let item = tcx.hir_item(item_id);
509            if let hir::ItemKind::GlobalAsm { asm, .. } = item.kind {
510                for (op, op_sp) in asm.operands {
511                    match *op {
512                        hir::InlineAsmOperand::Const { .. } => {
513                            // Only constants which resolve to a plain integer
514                            // are supported. Therefore the value should not
515                            // depend on any other items.
516                        }
517                        hir::InlineAsmOperand::SymFn { expr } => {
518                            let fn_ty = tcx.typeck(item_id.owner_id).expr_ty(expr);
519                            visit_fn_use(tcx, fn_ty, false, *op_sp, &mut used_items);
520                        }
521                        hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
522                            let instance = Instance::mono(tcx, def_id);
523                            if tcx.should_codegen_locally(instance) {
524                                trace!("collecting static {:?}", def_id);
525                                used_items.push(dummy_spanned(MonoItem::Static(def_id)));
526                            }
527                        }
528                        hir::InlineAsmOperand::In { .. }
529                        | hir::InlineAsmOperand::Out { .. }
530                        | hir::InlineAsmOperand::InOut { .. }
531                        | hir::InlineAsmOperand::SplitInOut { .. }
532                        | hir::InlineAsmOperand::Label { .. } => {
533                            span_bug!(*op_sp, "invalid operand type for global_asm!")
534                        }
535                    }
536                }
537            } else {
538                span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
539            }
540
541            // mention_items stays empty as nothing gets optimized here.
542        }
543    };
544
545    // Check for PMEs and emit a diagnostic if one happened. To try to show relevant edges of the
546    // mono item graph.
547    if tcx.dcx().err_count() > error_count
548        && starting_item.node.is_generic_fn()
549        && starting_item.node.is_user_defined()
550    {
551        match starting_item.node {
552            MonoItem::Fn(instance) => tcx.dcx().emit_note(EncounteredErrorWhileInstantiating {
553                span: starting_item.span,
554                kind: "fn",
555                instance,
556            }),
557            MonoItem::Static(def_id) => tcx.dcx().emit_note(EncounteredErrorWhileInstantiating {
558                span: starting_item.span,
559                kind: "static",
560                instance: Instance::new_raw(def_id, GenericArgs::empty()),
561            }),
562            MonoItem::GlobalAsm(_) => {
563                tcx.dcx().emit_note(EncounteredErrorWhileInstantiatingGlobalAsm {
564                    span: starting_item.span,
565                })
566            }
567        }
568    }
569    // Only updating `usage_map` for used items as otherwise we may be inserting the same item
570    // multiple times (if it is first 'mentioned' and then later actually used), and the usage map
571    // logic does not like that.
572    // This is part of the output of collection and hence only relevant for "used" items.
573    // ("Mentioned" items are only considered internally during collection.)
574    if mode == CollectionMode::UsedItems {
575        state.usage_map.lock_mut().record_used(starting_item.node, &used_items);
576    }
577
578    {
579        let mut visited = OnceCell::default();
580        if mode == CollectionMode::UsedItems {
581            used_items
582                .items
583                .retain(|k, _| visited.get_mut_or_init(|| state.visited.lock_mut()).insert(*k));
584        }
585
586        let mut mentioned = OnceCell::default();
587        mentioned_items.items.retain(|k, _| {
588            !visited.get_or_init(|| state.visited.lock()).contains(k)
589                && mentioned.get_mut_or_init(|| state.mentioned.lock_mut()).insert(*k)
590        });
591    }
592    if mode == CollectionMode::MentionedItems {
593        assert!(used_items.is_empty(), "'mentioned' collection should never encounter used items");
594    } else {
595        for used_item in used_items {
596            collect_items_rec(
597                tcx,
598                used_item,
599                state,
600                recursion_depths,
601                recursion_limit,
602                CollectionMode::UsedItems,
603            );
604        }
605    }
606
607    // Walk over mentioned items *after* used items, so that if an item is both mentioned and used then
608    // the loop above has fully collected it, so this loop will skip it.
609    for mentioned_item in mentioned_items {
610        collect_items_rec(
611            tcx,
612            mentioned_item,
613            state,
614            recursion_depths,
615            recursion_limit,
616            CollectionMode::MentionedItems,
617        );
618    }
619
620    if let Some((def_id, depth)) = recursion_depth_reset {
621        recursion_depths.insert(def_id, depth);
622    }
623}
624
625// Check whether we can normalize every type in the instantiated MIR body.
626fn check_normalization_error<'tcx>(
627    tcx: TyCtxt<'tcx>,
628    instance: Instance<'tcx>,
629    body: &Body<'tcx>,
630) -> Result<(), NormalizationErrorInMono> {
631    struct NormalizationChecker<'tcx> {
632        tcx: TyCtxt<'tcx>,
633        instance: Instance<'tcx>,
634    }
635    impl<'tcx> TypeVisitor<TyCtxt<'tcx>> for NormalizationChecker<'tcx> {
636        type Result = ControlFlow<()>;
637
638        fn visit_ty(&mut self, t: Ty<'tcx>) -> Self::Result {
639            match self.instance.try_instantiate_mir_and_normalize_erasing_regions(
640                self.tcx,
641                ty::TypingEnv::fully_monomorphized(),
642                ty::EarlyBinder::bind(t),
643            ) {
644                Ok(_) => ControlFlow::Continue(()),
645                Err(_) => ControlFlow::Break(()),
646            }
647        }
648    }
649
650    let mut checker = NormalizationChecker { tcx, instance };
651    if body.visit_with(&mut checker).is_break() { Err(NormalizationErrorInMono) } else { Ok(()) }
652}
653
654fn check_recursion_limit<'tcx>(
655    tcx: TyCtxt<'tcx>,
656    instance: Instance<'tcx>,
657    span: Span,
658    recursion_depths: &mut DefIdMap<usize>,
659    recursion_limit: Limit,
660) -> (DefId, usize) {
661    let def_id = instance.def_id();
662    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
663    debug!(" => recursion depth={}", recursion_depth);
664
665    let adjusted_recursion_depth = if tcx.is_lang_item(def_id, LangItem::DropInPlace) {
666        // HACK: drop_in_place creates tight monomorphization loops. Give
667        // it more margin.
668        recursion_depth / 4
669    } else {
670        recursion_depth
671    };
672
673    // Code that needs to instantiate the same function recursively
674    // more than the recursion limit is assumed to be causing an
675    // infinite expansion.
676    if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
677        let def_span = tcx.def_span(def_id);
678        let def_path_str = tcx.def_path_str(def_id);
679        tcx.dcx().emit_fatal(RecursionLimit { span, instance, def_span, def_path_str });
680    }
681
682    recursion_depths.insert(def_id, recursion_depth + 1);
683
684    (def_id, recursion_depth)
685}
686
687struct MirUsedCollector<'a, 'tcx> {
688    tcx: TyCtxt<'tcx>,
689    body: &'a mir::Body<'tcx>,
690    used_items: &'a mut MonoItems<'tcx>,
691    /// See the comment in `collect_items_of_instance` for the purpose of this set.
692    /// Note that this contains *not-monomorphized* items!
693    used_mentioned_items: &'a mut UnordSet<MentionedItem<'tcx>>,
694    instance: Instance<'tcx>,
695}
696
697impl<'a, 'tcx> MirUsedCollector<'a, 'tcx> {
698    fn monomorphize<T>(&self, value: T) -> T
699    where
700        T: TypeFoldable<TyCtxt<'tcx>>,
701    {
702        trace!("monomorphize: self.instance={:?}", self.instance);
703        self.instance.instantiate_mir_and_normalize_erasing_regions(
704            self.tcx,
705            ty::TypingEnv::fully_monomorphized(),
706            ty::EarlyBinder::bind(value),
707        )
708    }
709
710    /// Evaluates a *not yet monomorphized* constant.
711    fn eval_constant(&mut self, constant: &mir::ConstOperand<'tcx>) -> Option<mir::ConstValue> {
712        let const_ = self.monomorphize(constant.const_);
713        // Evaluate the constant. This makes const eval failure a collection-time error (rather than
714        // a codegen-time error). rustc stops after collection if there was an error, so this
715        // ensures codegen never has to worry about failing consts.
716        // (codegen relies on this and ICEs will happen if this is violated.)
717        match const_.eval(self.tcx, ty::TypingEnv::fully_monomorphized(), constant.span) {
718            Ok(v) => Some(v),
719            Err(ErrorHandled::TooGeneric(..)) => span_bug!(
720                constant.span,
721                "collection encountered polymorphic constant: {:?}",
722                const_
723            ),
724            Err(err @ ErrorHandled::Reported(..)) => {
725                err.emit_note(self.tcx);
726                return None;
727            }
728        }
729    }
730}
731
732impl<'a, 'tcx> MirVisitor<'tcx> for MirUsedCollector<'a, 'tcx> {
733    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
734        debug!("visiting rvalue {:?}", *rvalue);
735
736        let span = self.body.source_info(location).span;
737
738        match *rvalue {
739            // When doing an cast from a regular pointer to a wide pointer, we
740            // have to instantiate all methods of the trait being cast to, so we
741            // can build the appropriate vtable.
742            mir::Rvalue::Cast(
743                mir::CastKind::PointerCoercion(PointerCoercion::Unsize, _),
744                ref operand,
745                target_ty,
746            ) => {
747                let source_ty = operand.ty(self.body, self.tcx);
748                // *Before* monomorphizing, record that we already handled this mention.
749                self.used_mentioned_items
750                    .insert(MentionedItem::UnsizeCast { source_ty, target_ty });
751                let target_ty = self.monomorphize(target_ty);
752                let source_ty = self.monomorphize(source_ty);
753                let (source_ty, target_ty) =
754                    find_tails_for_unsizing(self.tcx.at(span), source_ty, target_ty);
755                // This could also be a different Unsize instruction, like
756                // from a fixed sized array to a slice. But we are only
757                // interested in things that produce a vtable.
758                if target_ty.is_trait() && !source_ty.is_trait() {
759                    create_mono_items_for_vtable_methods(
760                        self.tcx,
761                        target_ty,
762                        source_ty,
763                        span,
764                        self.used_items,
765                    );
766                }
767            }
768            mir::Rvalue::Cast(
769                mir::CastKind::PointerCoercion(PointerCoercion::ReifyFnPointer(_), _),
770                ref operand,
771                _,
772            ) => {
773                let fn_ty = operand.ty(self.body, self.tcx);
774                // *Before* monomorphizing, record that we already handled this mention.
775                self.used_mentioned_items.insert(MentionedItem::Fn(fn_ty));
776                let fn_ty = self.monomorphize(fn_ty);
777                visit_fn_use(self.tcx, fn_ty, false, span, self.used_items);
778            }
779            mir::Rvalue::Cast(
780                mir::CastKind::PointerCoercion(PointerCoercion::ClosureFnPointer(_), _),
781                ref operand,
782                _,
783            ) => {
784                let source_ty = operand.ty(self.body, self.tcx);
785                // *Before* monomorphizing, record that we already handled this mention.
786                self.used_mentioned_items.insert(MentionedItem::Closure(source_ty));
787                let source_ty = self.monomorphize(source_ty);
788                if let ty::Closure(def_id, args) = *source_ty.kind() {
789                    let instance =
790                        Instance::resolve_closure(self.tcx, def_id, args, ty::ClosureKind::FnOnce);
791                    if self.tcx.should_codegen_locally(instance) {
792                        self.used_items.push(create_fn_mono_item(self.tcx, instance, span));
793                    }
794                } else {
795                    bug!()
796                }
797            }
798            mir::Rvalue::ThreadLocalRef(def_id) => {
799                assert!(self.tcx.is_thread_local_static(def_id));
800                let instance = Instance::mono(self.tcx, def_id);
801                if self.tcx.should_codegen_locally(instance) {
802                    trace!("collecting thread-local static {:?}", def_id);
803                    self.used_items.push(respan(span, MonoItem::Static(def_id)));
804                }
805            }
806            _ => { /* not interesting */ }
807        }
808
809        self.super_rvalue(rvalue, location);
810    }
811
812    /// This does not walk the MIR of the constant as that is not needed for codegen, all we need is
813    /// to ensure that the constant evaluates successfully and walk the result.
814    #[instrument(skip(self), level = "debug")]
815    fn visit_const_operand(&mut self, constant: &mir::ConstOperand<'tcx>, _location: Location) {
816        // No `super_constant` as we don't care about `visit_ty`/`visit_ty_const`.
817        let Some(val) = self.eval_constant(constant) else { return };
818        collect_const_value(self.tcx, val, self.used_items);
819    }
820
821    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
822        debug!("visiting terminator {:?} @ {:?}", terminator, location);
823        let source = self.body.source_info(location).span;
824
825        let tcx = self.tcx;
826        let push_mono_lang_item = |this: &mut Self, lang_item: LangItem| {
827            let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, source));
828            if tcx.should_codegen_locally(instance) {
829                this.used_items.push(create_fn_mono_item(tcx, instance, source));
830            }
831        };
832
833        match terminator.kind {
834            mir::TerminatorKind::Call { ref func, .. }
835            | mir::TerminatorKind::TailCall { ref func, .. } => {
836                let callee_ty = func.ty(self.body, tcx);
837                // *Before* monomorphizing, record that we already handled this mention.
838                self.used_mentioned_items.insert(MentionedItem::Fn(callee_ty));
839                let callee_ty = self.monomorphize(callee_ty);
840
841                // HACK(explicit_tail_calls): collect tail calls to `#[track_caller]` functions as indirect,
842                // because we later call them as such, to prevent issues with ABI incompatibility.
843                // Ideally we'd replace such tail calls with normal call + return, but this requires
844                // post-mono MIR optimizations, which we don't yet have.
845                let force_indirect_call =
846                    if matches!(terminator.kind, mir::TerminatorKind::TailCall { .. })
847                        && let &ty::FnDef(def_id, args) = callee_ty.kind()
848                        && let instance = ty::Instance::expect_resolve(
849                            self.tcx,
850                            ty::TypingEnv::fully_monomorphized(),
851                            def_id,
852                            args,
853                            source,
854                        )
855                        && instance.def.requires_caller_location(self.tcx)
856                    {
857                        true
858                    } else {
859                        false
860                    };
861
862                visit_fn_use(
863                    self.tcx,
864                    callee_ty,
865                    !force_indirect_call,
866                    source,
867                    &mut self.used_items,
868                )
869            }
870            mir::TerminatorKind::Drop { ref place, .. } => {
871                let ty = place.ty(self.body, self.tcx).ty;
872                // *Before* monomorphizing, record that we already handled this mention.
873                self.used_mentioned_items.insert(MentionedItem::Drop(ty));
874                let ty = self.monomorphize(ty);
875                visit_drop_use(self.tcx, ty, true, source, self.used_items);
876            }
877            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
878                for op in operands {
879                    match *op {
880                        mir::InlineAsmOperand::SymFn { ref value } => {
881                            let fn_ty = value.const_.ty();
882                            // *Before* monomorphizing, record that we already handled this mention.
883                            self.used_mentioned_items.insert(MentionedItem::Fn(fn_ty));
884                            let fn_ty = self.monomorphize(fn_ty);
885                            visit_fn_use(self.tcx, fn_ty, false, source, self.used_items);
886                        }
887                        mir::InlineAsmOperand::SymStatic { def_id } => {
888                            let instance = Instance::mono(self.tcx, def_id);
889                            if self.tcx.should_codegen_locally(instance) {
890                                trace!("collecting asm sym static {:?}", def_id);
891                                self.used_items.push(respan(source, MonoItem::Static(def_id)));
892                            }
893                        }
894                        _ => {}
895                    }
896                }
897            }
898            mir::TerminatorKind::Assert { ref msg, .. } => match &**msg {
899                mir::AssertKind::BoundsCheck { .. } => {
900                    push_mono_lang_item(self, LangItem::PanicBoundsCheck);
901                }
902                mir::AssertKind::MisalignedPointerDereference { .. } => {
903                    push_mono_lang_item(self, LangItem::PanicMisalignedPointerDereference);
904                }
905                mir::AssertKind::NullPointerDereference => {
906                    push_mono_lang_item(self, LangItem::PanicNullPointerDereference);
907                }
908                mir::AssertKind::InvalidEnumConstruction(_) => {
909                    push_mono_lang_item(self, LangItem::PanicInvalidEnumConstruction);
910                }
911                _ => {
912                    push_mono_lang_item(self, msg.panic_function());
913                }
914            },
915            mir::TerminatorKind::UnwindTerminate(reason) => {
916                push_mono_lang_item(self, reason.lang_item());
917            }
918            mir::TerminatorKind::Goto { .. }
919            | mir::TerminatorKind::SwitchInt { .. }
920            | mir::TerminatorKind::UnwindResume
921            | mir::TerminatorKind::Return
922            | mir::TerminatorKind::Unreachable => {}
923            mir::TerminatorKind::CoroutineDrop
924            | mir::TerminatorKind::Yield { .. }
925            | mir::TerminatorKind::FalseEdge { .. }
926            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
927        }
928
929        if let Some(mir::UnwindAction::Terminate(reason)) = terminator.unwind() {
930            push_mono_lang_item(self, reason.lang_item());
931        }
932
933        self.super_terminator(terminator, location);
934    }
935}
936
937fn visit_drop_use<'tcx>(
938    tcx: TyCtxt<'tcx>,
939    ty: Ty<'tcx>,
940    is_direct_call: bool,
941    source: Span,
942    output: &mut MonoItems<'tcx>,
943) {
944    let instance = Instance::resolve_drop_in_place(tcx, ty);
945    visit_instance_use(tcx, instance, is_direct_call, source, output);
946}
947
948/// For every call of this function in the visitor, make sure there is a matching call in the
949/// `mentioned_items` pass!
950fn visit_fn_use<'tcx>(
951    tcx: TyCtxt<'tcx>,
952    ty: Ty<'tcx>,
953    is_direct_call: bool,
954    source: Span,
955    output: &mut MonoItems<'tcx>,
956) {
957    if let ty::FnDef(def_id, args) = *ty.kind() {
958        let instance = if is_direct_call {
959            ty::Instance::expect_resolve(
960                tcx,
961                ty::TypingEnv::fully_monomorphized(),
962                def_id,
963                args,
964                source,
965            )
966        } else {
967            match ty::Instance::resolve_for_fn_ptr(
968                tcx,
969                ty::TypingEnv::fully_monomorphized(),
970                def_id,
971                args,
972            ) {
973                Some(instance) => instance,
974                _ => bug!("failed to resolve instance for {ty}"),
975            }
976        };
977        visit_instance_use(tcx, instance, is_direct_call, source, output);
978    }
979}
980
981fn visit_instance_use<'tcx>(
982    tcx: TyCtxt<'tcx>,
983    instance: ty::Instance<'tcx>,
984    is_direct_call: bool,
985    source: Span,
986    output: &mut MonoItems<'tcx>,
987) {
988    debug!("visit_item_use({:?}, is_direct_call={:?})", instance, is_direct_call);
989    if !tcx.should_codegen_locally(instance) {
990        return;
991    }
992    if let Some(intrinsic) = tcx.intrinsic(instance.def_id()) {
993        collect_autodiff_fn(tcx, instance, intrinsic, output);
994
995        if let Some(_requirement) = ValidityRequirement::from_intrinsic(intrinsic.name) {
996            // The intrinsics assert_inhabited, assert_zero_valid, and assert_mem_uninitialized_valid will
997            // be lowered in codegen to nothing or a call to panic_nounwind. So if we encounter any
998            // of those intrinsics, we need to include a mono item for panic_nounwind, else we may try to
999            // codegen a call to that function without generating code for the function itself.
1000            let def_id = tcx.require_lang_item(LangItem::PanicNounwind, source);
1001            let panic_instance = Instance::mono(tcx, def_id);
1002            if tcx.should_codegen_locally(panic_instance) {
1003                output.push(create_fn_mono_item(tcx, panic_instance, source));
1004            }
1005        } else if !intrinsic.must_be_overridden {
1006            // Codegen the fallback body of intrinsics with fallback bodies.
1007            // We explicitly skip this otherwise to ensure we get a linker error
1008            // if anyone tries to call this intrinsic and the codegen backend did not
1009            // override the implementation.
1010            let instance = ty::Instance::new_raw(instance.def_id(), instance.args);
1011            if tcx.should_codegen_locally(instance) {
1012                output.push(create_fn_mono_item(tcx, instance, source));
1013            }
1014        }
1015    }
1016
1017    match instance.def {
1018        ty::InstanceKind::Virtual(..) | ty::InstanceKind::Intrinsic(_) => {
1019            if !is_direct_call {
1020                bug!("{:?} being reified", instance);
1021            }
1022        }
1023        ty::InstanceKind::ThreadLocalShim(..) => {
1024            bug!("{:?} being reified", instance);
1025        }
1026        ty::InstanceKind::DropGlue(_, None) => {
1027            // Don't need to emit noop drop glue if we are calling directly.
1028            //
1029            // Note that we also optimize away the call to visit_instance_use in vtable construction
1030            // (see create_mono_items_for_vtable_methods).
1031            if !is_direct_call {
1032                output.push(create_fn_mono_item(tcx, instance, source));
1033            }
1034        }
1035        ty::InstanceKind::DropGlue(_, Some(_))
1036        | ty::InstanceKind::FutureDropPollShim(..)
1037        | ty::InstanceKind::AsyncDropGlue(_, _)
1038        | ty::InstanceKind::AsyncDropGlueCtorShim(_, _)
1039        | ty::InstanceKind::VTableShim(..)
1040        | ty::InstanceKind::ReifyShim(..)
1041        | ty::InstanceKind::ClosureOnceShim { .. }
1042        | ty::InstanceKind::ConstructCoroutineInClosureShim { .. }
1043        | ty::InstanceKind::Item(..)
1044        | ty::InstanceKind::FnPtrShim(..)
1045        | ty::InstanceKind::CloneShim(..)
1046        | ty::InstanceKind::FnPtrAddrShim(..) => {
1047            output.push(create_fn_mono_item(tcx, instance, source));
1048        }
1049    }
1050}
1051
1052/// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
1053/// can just link to the upstream crate and therefore don't need a mono item.
1054fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) -> bool {
1055    let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
1056        return true;
1057    };
1058
1059    if tcx.is_foreign_item(def_id) {
1060        // Foreign items are always linked against, there's no way of instantiating them.
1061        return false;
1062    }
1063
1064    if tcx.def_kind(def_id).has_codegen_attrs()
1065        && matches!(tcx.codegen_fn_attrs(def_id).inline, InlineAttr::Force { .. })
1066    {
1067        // `#[rustc_force_inline]` items should never be codegened. This should be caught by
1068        // the MIR validator.
1069        tcx.dcx().delayed_bug("attempt to codegen `#[rustc_force_inline]` item");
1070    }
1071
1072    if def_id.is_local() {
1073        // Local items cannot be referred to locally without monomorphizing them locally.
1074        return true;
1075    }
1076
1077    if tcx.is_reachable_non_generic(def_id) || instance.upstream_monomorphization(tcx).is_some() {
1078        // We can link to the item in question, no instance needed in this crate.
1079        return false;
1080    }
1081
1082    if let DefKind::Static { .. } = tcx.def_kind(def_id) {
1083        // We cannot monomorphize statics from upstream crates.
1084        return false;
1085    }
1086
1087    if !tcx.is_mir_available(def_id) {
1088        tcx.dcx().emit_fatal(NoOptimizedMir {
1089            span: tcx.def_span(def_id),
1090            crate_name: tcx.crate_name(def_id.krate),
1091            instance: instance.to_string(),
1092        });
1093    }
1094
1095    true
1096}
1097
1098/// For a given pair of source and target type that occur in an unsizing coercion,
1099/// this function finds the pair of types that determines the vtable linking
1100/// them.
1101///
1102/// For example, the source type might be `&SomeStruct` and the target type
1103/// might be `&dyn SomeTrait` in a cast like:
1104///
1105/// ```rust,ignore (not real code)
1106/// let src: &SomeStruct = ...;
1107/// let target = src as &dyn SomeTrait;
1108/// ```
1109///
1110/// Then the output of this function would be (SomeStruct, SomeTrait) since for
1111/// constructing the `target` wide-pointer we need the vtable for that pair.
1112///
1113/// Things can get more complicated though because there's also the case where
1114/// the unsized type occurs as a field:
1115///
1116/// ```rust
1117/// struct ComplexStruct<T: ?Sized> {
1118///    a: u32,
1119///    b: f64,
1120///    c: T
1121/// }
1122/// ```
1123///
1124/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
1125/// is unsized, `&SomeStruct` is a wide pointer, and the vtable it points to is
1126/// for the pair of `T` (which is a trait) and the concrete type that `T` was
1127/// originally coerced from:
1128///
1129/// ```rust,ignore (not real code)
1130/// let src: &ComplexStruct<SomeStruct> = ...;
1131/// let target = src as &ComplexStruct<dyn SomeTrait>;
1132/// ```
1133///
1134/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
1135/// `(SomeStruct, SomeTrait)`.
1136///
1137/// Finally, there is also the case of custom unsizing coercions, e.g., for
1138/// smart pointers such as `Rc` and `Arc`.
1139fn find_tails_for_unsizing<'tcx>(
1140    tcx: TyCtxtAt<'tcx>,
1141    source_ty: Ty<'tcx>,
1142    target_ty: Ty<'tcx>,
1143) -> (Ty<'tcx>, Ty<'tcx>) {
1144    let typing_env = ty::TypingEnv::fully_monomorphized();
1145    debug_assert!(!source_ty.has_param(), "{source_ty} should be fully monomorphic");
1146    debug_assert!(!target_ty.has_param(), "{target_ty} should be fully monomorphic");
1147
1148    match (source_ty.kind(), target_ty.kind()) {
1149        (&ty::Pat(source, _), &ty::Pat(target, _)) => find_tails_for_unsizing(tcx, source, target),
1150        (
1151            &ty::Ref(_, source_pointee, _),
1152            &ty::Ref(_, target_pointee, _) | &ty::RawPtr(target_pointee, _),
1153        )
1154        | (&ty::RawPtr(source_pointee, _), &ty::RawPtr(target_pointee, _)) => {
1155            tcx.struct_lockstep_tails_for_codegen(source_pointee, target_pointee, typing_env)
1156        }
1157
1158        // `Box<T>` could go through the ADT code below, b/c it'll unpeel to `Unique<T>`,
1159        // and eventually bottom out in a raw ref, but we can micro-optimize it here.
1160        (_, _)
1161            if let Some(source_boxed) = source_ty.boxed_ty()
1162                && let Some(target_boxed) = target_ty.boxed_ty() =>
1163        {
1164            tcx.struct_lockstep_tails_for_codegen(source_boxed, target_boxed, typing_env)
1165        }
1166
1167        (&ty::Adt(source_adt_def, source_args), &ty::Adt(target_adt_def, target_args)) => {
1168            assert_eq!(source_adt_def, target_adt_def);
1169            let CustomCoerceUnsized::Struct(coerce_index) =
1170                match crate::custom_coerce_unsize_info(tcx, source_ty, target_ty) {
1171                    Ok(ccu) => ccu,
1172                    Err(e) => {
1173                        let e = Ty::new_error(tcx.tcx, e);
1174                        return (e, e);
1175                    }
1176                };
1177            let coerce_field = &source_adt_def.non_enum_variant().fields[coerce_index];
1178            // We're getting a possibly unnormalized type, so normalize it.
1179            let source_field =
1180                tcx.normalize_erasing_regions(typing_env, coerce_field.ty(*tcx, source_args));
1181            let target_field =
1182                tcx.normalize_erasing_regions(typing_env, coerce_field.ty(*tcx, target_args));
1183            find_tails_for_unsizing(tcx, source_field, target_field)
1184        }
1185
1186        _ => bug!(
1187            "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
1188            source_ty,
1189            target_ty
1190        ),
1191    }
1192}
1193
1194#[instrument(skip(tcx), level = "debug", ret)]
1195fn create_fn_mono_item<'tcx>(
1196    tcx: TyCtxt<'tcx>,
1197    instance: Instance<'tcx>,
1198    source: Span,
1199) -> Spanned<MonoItem<'tcx>> {
1200    let def_id = instance.def_id();
1201    if tcx.sess.opts.unstable_opts.profile_closures
1202        && def_id.is_local()
1203        && tcx.is_closure_like(def_id)
1204    {
1205        crate::util::dump_closure_profile(tcx, instance);
1206    }
1207
1208    respan(source, MonoItem::Fn(instance))
1209}
1210
1211/// Creates a `MonoItem` for each method that is referenced by the vtable for
1212/// the given trait/impl pair.
1213fn create_mono_items_for_vtable_methods<'tcx>(
1214    tcx: TyCtxt<'tcx>,
1215    trait_ty: Ty<'tcx>,
1216    impl_ty: Ty<'tcx>,
1217    source: Span,
1218    output: &mut MonoItems<'tcx>,
1219) {
1220    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());
1221
1222    let ty::Dynamic(trait_ty, ..) = trait_ty.kind() else {
1223        bug!("create_mono_items_for_vtable_methods: {trait_ty:?} not a trait type");
1224    };
1225    if let Some(principal) = trait_ty.principal() {
1226        let trait_ref =
1227            tcx.instantiate_bound_regions_with_erased(principal.with_self_ty(tcx, impl_ty));
1228        assert!(!trait_ref.has_escaping_bound_vars());
1229
1230        // Walk all methods of the trait, including those of its supertraits
1231        let entries = tcx.vtable_entries(trait_ref);
1232        debug!(?entries);
1233        let methods = entries
1234            .iter()
1235            .filter_map(|entry| match entry {
1236                VtblEntry::MetadataDropInPlace
1237                | VtblEntry::MetadataSize
1238                | VtblEntry::MetadataAlign
1239                | VtblEntry::Vacant => None,
1240                VtblEntry::TraitVPtr(_) => {
1241                    // all super trait items already covered, so skip them.
1242                    None
1243                }
1244                VtblEntry::Method(instance) => {
1245                    Some(*instance).filter(|instance| tcx.should_codegen_locally(*instance))
1246                }
1247            })
1248            .map(|item| create_fn_mono_item(tcx, item, source));
1249        output.extend(methods);
1250    }
1251
1252    // Also add the destructor, if it's necessary.
1253    //
1254    // This matches the check in vtable_allocation_provider in middle/ty/vtable.rs,
1255    // if we don't need drop we're not adding an actual pointer to the vtable.
1256    if impl_ty.needs_drop(tcx, ty::TypingEnv::fully_monomorphized()) {
1257        visit_drop_use(tcx, impl_ty, false, source, output);
1258    }
1259}
1260
1261/// Scans the CTFE alloc in order to find function pointers and statics that must be monomorphized.
1262fn collect_alloc<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
1263    match tcx.global_alloc(alloc_id) {
1264        GlobalAlloc::Static(def_id) => {
1265            assert!(!tcx.is_thread_local_static(def_id));
1266            let instance = Instance::mono(tcx, def_id);
1267            if tcx.should_codegen_locally(instance) {
1268                trace!("collecting static {:?}", def_id);
1269                output.push(dummy_spanned(MonoItem::Static(def_id)));
1270            }
1271        }
1272        GlobalAlloc::Memory(alloc) => {
1273            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
1274            let ptrs = alloc.inner().provenance().ptrs();
1275            // avoid `ensure_sufficient_stack` in the common case of "no pointers"
1276            if !ptrs.is_empty() {
1277                rustc_data_structures::stack::ensure_sufficient_stack(move || {
1278                    for &prov in ptrs.values() {
1279                        collect_alloc(tcx, prov.alloc_id(), output);
1280                    }
1281                });
1282            }
1283        }
1284        GlobalAlloc::Function { instance, .. } => {
1285            if tcx.should_codegen_locally(instance) {
1286                trace!("collecting {:?} with {:#?}", alloc_id, instance);
1287                output.push(create_fn_mono_item(tcx, instance, DUMMY_SP));
1288            }
1289        }
1290        GlobalAlloc::VTable(ty, dyn_ty) => {
1291            let alloc_id = tcx.vtable_allocation((
1292                ty,
1293                dyn_ty
1294                    .principal()
1295                    .map(|principal| tcx.instantiate_bound_regions_with_erased(principal)),
1296            ));
1297            collect_alloc(tcx, alloc_id, output)
1298        }
1299        GlobalAlloc::TypeId { .. } => {}
1300    }
1301}
1302
1303/// Scans the MIR in order to find function calls, closures, and drop-glue.
1304///
1305/// Anything that's found is added to `output`. Furthermore the "mentioned items" of the MIR are returned.
1306#[instrument(skip(tcx), level = "debug")]
1307fn collect_items_of_instance<'tcx>(
1308    tcx: TyCtxt<'tcx>,
1309    instance: Instance<'tcx>,
1310    mode: CollectionMode,
1311) -> Result<(MonoItems<'tcx>, MonoItems<'tcx>), NormalizationErrorInMono> {
1312    // This item is getting monomorphized, do mono-time checks.
1313    let body = tcx.instance_mir(instance.def);
1314    // Plenty of code paths later assume that everything can be normalized. So we have to check
1315    // normalization first.
1316    // We choose to emit the error outside to provide helpful diagnostics.
1317    check_normalization_error(tcx, instance, body)?;
1318    tcx.ensure_ok().check_mono_item(instance);
1319
1320    // Naively, in "used" collection mode, all functions get added to *both* `used_items` and
1321    // `mentioned_items`. Mentioned items processing will then notice that they have already been
1322    // visited, but at that point each mentioned item has been monomorphized, added to the
1323    // `mentioned_items` worklist, and checked in the global set of visited items. To remove that
1324    // overhead, we have a special optimization that avoids adding items to `mentioned_items` when
1325    // they are already added in `used_items`. We could just scan `used_items`, but that's a linear
1326    // scan and not very efficient. Furthermore we can only do that *after* monomorphizing the
1327    // mentioned item. So instead we collect all pre-monomorphized `MentionedItem` that were already
1328    // added to `used_items` in a hash set, which can efficiently query in the
1329    // `body.mentioned_items` loop below without even having to monomorphize the item.
1330    let mut used_items = MonoItems::new();
1331    let mut mentioned_items = MonoItems::new();
1332    let mut used_mentioned_items = Default::default();
1333    let mut collector = MirUsedCollector {
1334        tcx,
1335        body,
1336        used_items: &mut used_items,
1337        used_mentioned_items: &mut used_mentioned_items,
1338        instance,
1339    };
1340
1341    if mode == CollectionMode::UsedItems {
1342        if tcx.sess.opts.debuginfo == DebugInfo::Full {
1343            for var_debug_info in &body.var_debug_info {
1344                collector.visit_var_debug_info(var_debug_info);
1345            }
1346        }
1347        for (bb, data) in traversal::mono_reachable(body, tcx, instance) {
1348            collector.visit_basic_block_data(bb, data)
1349        }
1350    }
1351
1352    // Always visit all `required_consts`, so that we evaluate them and abort compilation if any of
1353    // them errors.
1354    for const_op in body.required_consts() {
1355        if let Some(val) = collector.eval_constant(const_op) {
1356            collect_const_value(tcx, val, &mut mentioned_items);
1357        }
1358    }
1359
1360    // Always gather mentioned items. We try to avoid processing items that we have already added to
1361    // `used_items` above.
1362    for item in body.mentioned_items() {
1363        if !collector.used_mentioned_items.contains(&item.node) {
1364            let item_mono = collector.monomorphize(item.node);
1365            visit_mentioned_item(tcx, &item_mono, item.span, &mut mentioned_items);
1366        }
1367    }
1368
1369    Ok((used_items, mentioned_items))
1370}
1371
1372fn items_of_instance<'tcx>(
1373    tcx: TyCtxt<'tcx>,
1374    (instance, mode): (Instance<'tcx>, CollectionMode),
1375) -> Result<
1376    (&'tcx [Spanned<MonoItem<'tcx>>], &'tcx [Spanned<MonoItem<'tcx>>]),
1377    NormalizationErrorInMono,
1378> {
1379    let (used_items, mentioned_items) = collect_items_of_instance(tcx, instance, mode)?;
1380
1381    let used_items = tcx.arena.alloc_from_iter(used_items);
1382    let mentioned_items = tcx.arena.alloc_from_iter(mentioned_items);
1383
1384    Ok((used_items, mentioned_items))
1385}
1386
1387/// `item` must be already monomorphized.
1388#[instrument(skip(tcx, span, output), level = "debug")]
1389fn visit_mentioned_item<'tcx>(
1390    tcx: TyCtxt<'tcx>,
1391    item: &MentionedItem<'tcx>,
1392    span: Span,
1393    output: &mut MonoItems<'tcx>,
1394) {
1395    match *item {
1396        MentionedItem::Fn(ty) => {
1397            if let ty::FnDef(def_id, args) = *ty.kind() {
1398                let instance = Instance::expect_resolve(
1399                    tcx,
1400                    ty::TypingEnv::fully_monomorphized(),
1401                    def_id,
1402                    args,
1403                    span,
1404                );
1405                // `visit_instance_use` was written for "used" item collection but works just as well
1406                // for "mentioned" item collection.
1407                // We can set `is_direct_call`; that just means we'll skip a bunch of shims that anyway
1408                // can't have their own failing constants.
1409                visit_instance_use(tcx, instance, /*is_direct_call*/ true, span, output);
1410            }
1411        }
1412        MentionedItem::Drop(ty) => {
1413            visit_drop_use(tcx, ty, /*is_direct_call*/ true, span, output);
1414        }
1415        MentionedItem::UnsizeCast { source_ty, target_ty } => {
1416            let (source_ty, target_ty) =
1417                find_tails_for_unsizing(tcx.at(span), source_ty, target_ty);
1418            // This could also be a different Unsize instruction, like
1419            // from a fixed sized array to a slice. But we are only
1420            // interested in things that produce a vtable.
1421            if target_ty.is_trait() && !source_ty.is_trait() {
1422                create_mono_items_for_vtable_methods(tcx, target_ty, source_ty, span, output);
1423            }
1424        }
1425        MentionedItem::Closure(source_ty) => {
1426            if let ty::Closure(def_id, args) = *source_ty.kind() {
1427                let instance =
1428                    Instance::resolve_closure(tcx, def_id, args, ty::ClosureKind::FnOnce);
1429                if tcx.should_codegen_locally(instance) {
1430                    output.push(create_fn_mono_item(tcx, instance, span));
1431                }
1432            } else {
1433                bug!()
1434            }
1435        }
1436    }
1437}
1438
1439#[instrument(skip(tcx, output), level = "debug")]
1440fn collect_const_value<'tcx>(
1441    tcx: TyCtxt<'tcx>,
1442    value: mir::ConstValue,
1443    output: &mut MonoItems<'tcx>,
1444) {
1445    match value {
1446        mir::ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => {
1447            collect_alloc(tcx, ptr.provenance.alloc_id(), output)
1448        }
1449        mir::ConstValue::Indirect { alloc_id, .. }
1450        | mir::ConstValue::Slice { alloc_id, meta: _ } => collect_alloc(tcx, alloc_id, output),
1451        _ => {}
1452    }
1453}
1454
1455//=-----------------------------------------------------------------------------
1456// Root Collection
1457//=-----------------------------------------------------------------------------
1458
1459// Find all non-generic items by walking the HIR. These items serve as roots to
1460// start monomorphizing from.
1461#[instrument(skip(tcx, mode), level = "debug")]
1462fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionStrategy) -> Vec<MonoItem<'_>> {
1463    debug!("collecting roots");
1464    let mut roots = MonoItems::new();
1465
1466    {
1467        let entry_fn = tcx.entry_fn(());
1468
1469        debug!("collect_roots: entry_fn = {:?}", entry_fn);
1470
1471        let mut collector = RootCollector { tcx, strategy: mode, entry_fn, output: &mut roots };
1472
1473        let crate_items = tcx.hir_crate_items(());
1474
1475        for id in crate_items.free_items() {
1476            collector.process_item(id);
1477        }
1478
1479        for id in crate_items.impl_items() {
1480            collector.process_impl_item(id);
1481        }
1482
1483        for id in crate_items.nested_bodies() {
1484            collector.process_nested_body(id);
1485        }
1486
1487        collector.push_extra_entry_roots();
1488    }
1489
1490    // We can only codegen items that are instantiable - items all of
1491    // whose predicates hold. Luckily, items that aren't instantiable
1492    // can't actually be used, so we can just skip codegenning them.
1493    roots
1494        .into_iter()
1495        .filter_map(|Spanned { node: mono_item, .. }| {
1496            mono_item.is_instantiable(tcx).then_some(mono_item)
1497        })
1498        .collect()
1499}
1500
1501struct RootCollector<'a, 'tcx> {
1502    tcx: TyCtxt<'tcx>,
1503    strategy: MonoItemCollectionStrategy,
1504    output: &'a mut MonoItems<'tcx>,
1505    entry_fn: Option<(DefId, EntryFnType)>,
1506}
1507
1508impl<'v> RootCollector<'_, 'v> {
1509    fn process_item(&mut self, id: hir::ItemId) {
1510        match self.tcx.def_kind(id.owner_id) {
1511            DefKind::Enum | DefKind::Struct | DefKind::Union => {
1512                if self.strategy == MonoItemCollectionStrategy::Eager
1513                    && !self.tcx.generics_of(id.owner_id).requires_monomorphization(self.tcx)
1514                {
1515                    debug!("RootCollector: ADT drop-glue for `{id:?}`",);
1516                    let id_args =
1517                        ty::GenericArgs::for_item(self.tcx, id.owner_id.to_def_id(), |param, _| {
1518                            match param.kind {
1519                                GenericParamDefKind::Lifetime => {
1520                                    self.tcx.lifetimes.re_erased.into()
1521                                }
1522                                GenericParamDefKind::Type { .. }
1523                                | GenericParamDefKind::Const { .. } => {
1524                                    unreachable!(
1525                                        "`own_requires_monomorphization` check means that \
1526                                we should have no type/const params"
1527                                    )
1528                                }
1529                            }
1530                        });
1531
1532                    // This type is impossible to instantiate, so we should not try to
1533                    // generate a `drop_in_place` instance for it.
1534                    if self.tcx.instantiate_and_check_impossible_predicates((
1535                        id.owner_id.to_def_id(),
1536                        id_args,
1537                    )) {
1538                        return;
1539                    }
1540
1541                    let ty =
1542                        self.tcx.type_of(id.owner_id.to_def_id()).instantiate(self.tcx, id_args);
1543                    assert!(!ty.has_non_region_param());
1544                    visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
1545                }
1546            }
1547            DefKind::GlobalAsm => {
1548                debug!(
1549                    "RootCollector: ItemKind::GlobalAsm({})",
1550                    self.tcx.def_path_str(id.owner_id)
1551                );
1552                self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
1553            }
1554            DefKind::Static { .. } => {
1555                let def_id = id.owner_id.to_def_id();
1556                debug!("RootCollector: ItemKind::Static({})", self.tcx.def_path_str(def_id));
1557                self.output.push(dummy_spanned(MonoItem::Static(def_id)));
1558            }
1559            DefKind::Const => {
1560                // Const items only generate mono items if they are actually used somewhere.
1561                // Just declaring them is insufficient.
1562
1563                // If we're collecting items eagerly, then recurse into all constants.
1564                // Otherwise the value is only collected when explicitly mentioned in other items.
1565                if self.strategy == MonoItemCollectionStrategy::Eager {
1566                    if !self.tcx.generics_of(id.owner_id).own_requires_monomorphization()
1567                        && let Ok(val) = self.tcx.const_eval_poly(id.owner_id.to_def_id())
1568                    {
1569                        collect_const_value(self.tcx, val, self.output);
1570                    }
1571                }
1572            }
1573            DefKind::Impl { of_trait: true } => {
1574                if self.strategy == MonoItemCollectionStrategy::Eager {
1575                    create_mono_items_for_default_impls(self.tcx, id, self.output);
1576                }
1577            }
1578            DefKind::Fn => {
1579                self.push_if_root(id.owner_id.def_id);
1580            }
1581            _ => {}
1582        }
1583    }
1584
1585    fn process_impl_item(&mut self, id: hir::ImplItemId) {
1586        if self.tcx.def_kind(id.owner_id) == DefKind::AssocFn {
1587            self.push_if_root(id.owner_id.def_id);
1588        }
1589    }
1590
1591    fn process_nested_body(&mut self, def_id: LocalDefId) {
1592        match self.tcx.def_kind(def_id) {
1593            DefKind::Closure => {
1594                // for 'pub async fn foo(..)' also trying to monomorphize foo::{closure}
1595                let is_pub_fn_coroutine =
1596                    match *self.tcx.type_of(def_id).instantiate_identity().kind() {
1597                        ty::Coroutine(cor_id, _args) => {
1598                            let tcx = self.tcx;
1599                            let parent_id = tcx.parent(cor_id);
1600                            tcx.def_kind(parent_id) == DefKind::Fn
1601                                && tcx.asyncness(parent_id).is_async()
1602                                && tcx.visibility(parent_id).is_public()
1603                        }
1604                        ty::Closure(..) | ty::CoroutineClosure(..) => false,
1605                        _ => unreachable!(),
1606                    };
1607                if (self.strategy == MonoItemCollectionStrategy::Eager || is_pub_fn_coroutine)
1608                    && !self
1609                        .tcx
1610                        .generics_of(self.tcx.typeck_root_def_id(def_id.to_def_id()))
1611                        .requires_monomorphization(self.tcx)
1612                {
1613                    let instance = match *self.tcx.type_of(def_id).instantiate_identity().kind() {
1614                        ty::Closure(def_id, args)
1615                        | ty::Coroutine(def_id, args)
1616                        | ty::CoroutineClosure(def_id, args) => {
1617                            Instance::new_raw(def_id, self.tcx.erase_and_anonymize_regions(args))
1618                        }
1619                        _ => unreachable!(),
1620                    };
1621                    let Ok(instance) = self.tcx.try_normalize_erasing_regions(
1622                        ty::TypingEnv::fully_monomorphized(),
1623                        instance,
1624                    ) else {
1625                        // Don't ICE on an impossible-to-normalize closure.
1626                        return;
1627                    };
1628                    let mono_item = create_fn_mono_item(self.tcx, instance, DUMMY_SP);
1629                    if mono_item.node.is_instantiable(self.tcx) {
1630                        self.output.push(mono_item);
1631                    }
1632                }
1633            }
1634            _ => {}
1635        }
1636    }
1637
1638    fn is_root(&self, def_id: LocalDefId) -> bool {
1639        !self.tcx.generics_of(def_id).requires_monomorphization(self.tcx)
1640            && match self.strategy {
1641                MonoItemCollectionStrategy::Eager => {
1642                    !matches!(self.tcx.codegen_fn_attrs(def_id).inline, InlineAttr::Force { .. })
1643                }
1644                MonoItemCollectionStrategy::Lazy => {
1645                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
1646                        || self.tcx.is_reachable_non_generic(def_id)
1647                        || {
1648                            let flags = self.tcx.codegen_fn_attrs(def_id).flags;
1649                            flags.intersects(
1650                                CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL
1651                                    | CodegenFnAttrFlags::EXTERNALLY_IMPLEMENTABLE_ITEM,
1652                            )
1653                        }
1654                }
1655            }
1656    }
1657
1658    /// If `def_id` represents a root, pushes it onto the list of
1659    /// outputs. (Note that all roots must be monomorphic.)
1660    #[instrument(skip(self), level = "debug")]
1661    fn push_if_root(&mut self, def_id: LocalDefId) {
1662        if self.is_root(def_id) {
1663            debug!("found root");
1664
1665            let instance = Instance::mono(self.tcx, def_id.to_def_id());
1666            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
1667        }
1668    }
1669
1670    /// As a special case, when/if we encounter the
1671    /// `main()` function, we also have to generate a
1672    /// monomorphized copy of the start lang item based on
1673    /// the return type of `main`. This is not needed when
1674    /// the user writes their own `start` manually.
1675    fn push_extra_entry_roots(&mut self) {
1676        let Some((main_def_id, EntryFnType::Main { .. })) = self.entry_fn else {
1677            return;
1678        };
1679
1680        let main_instance = Instance::mono(self.tcx, main_def_id);
1681        if self.tcx.should_codegen_locally(main_instance) {
1682            self.output.push(create_fn_mono_item(
1683                self.tcx,
1684                main_instance,
1685                self.tcx.def_span(main_def_id),
1686            ));
1687        }
1688
1689        let Some(start_def_id) = self.tcx.lang_items().start_fn() else {
1690            self.tcx.dcx().emit_fatal(errors::StartNotFound);
1691        };
1692        let main_ret_ty = self.tcx.fn_sig(main_def_id).no_bound_vars().unwrap().output();
1693
1694        // Given that `main()` has no arguments,
1695        // then its return type cannot have
1696        // late-bound regions, since late-bound
1697        // regions must appear in the argument
1698        // listing.
1699        let main_ret_ty = self.tcx.normalize_erasing_regions(
1700            ty::TypingEnv::fully_monomorphized(),
1701            main_ret_ty.no_bound_vars().unwrap(),
1702        );
1703
1704        let start_instance = Instance::expect_resolve(
1705            self.tcx,
1706            ty::TypingEnv::fully_monomorphized(),
1707            start_def_id,
1708            self.tcx.mk_args(&[main_ret_ty.into()]),
1709            DUMMY_SP,
1710        );
1711
1712        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
1713    }
1714}
1715
1716#[instrument(level = "debug", skip(tcx, output))]
1717fn create_mono_items_for_default_impls<'tcx>(
1718    tcx: TyCtxt<'tcx>,
1719    item: hir::ItemId,
1720    output: &mut MonoItems<'tcx>,
1721) {
1722    let impl_ = tcx.impl_trait_header(item.owner_id);
1723
1724    if impl_.polarity == ty::ImplPolarity::Negative {
1725        return;
1726    }
1727
1728    if tcx.generics_of(item.owner_id).own_requires_monomorphization() {
1729        return;
1730    }
1731
1732    // Lifetimes never affect trait selection, so we are allowed to eagerly
1733    // instantiate an instance of an impl method if the impl (and method,
1734    // which we check below) is only parameterized over lifetime. In that case,
1735    // we use the ReErased, which has no lifetime information associated with
1736    // it, to validate whether or not the impl is legal to instantiate at all.
1737    let only_region_params = |param: &ty::GenericParamDef, _: &_| match param.kind {
1738        GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
1739        GenericParamDefKind::Type { .. } | GenericParamDefKind::Const { .. } => {
1740            unreachable!(
1741                "`own_requires_monomorphization` check means that \
1742                we should have no type/const params"
1743            )
1744        }
1745    };
1746    let impl_args = GenericArgs::for_item(tcx, item.owner_id.to_def_id(), only_region_params);
1747    let trait_ref = impl_.trait_ref.instantiate(tcx, impl_args);
1748
1749    // Unlike 'lazy' monomorphization that begins by collecting items transitively
1750    // called by `main` or other global items, when eagerly monomorphizing impl
1751    // items, we never actually check that the predicates of this impl are satisfied
1752    // in a empty param env (i.e. with no assumptions).
1753    //
1754    // Even though this impl has no type or const generic parameters, because we don't
1755    // consider higher-ranked predicates such as `for<'a> &'a mut [u8]: Copy` to
1756    // be trivially false. We must now check that the impl has no impossible-to-satisfy
1757    // predicates.
1758    if tcx.instantiate_and_check_impossible_predicates((item.owner_id.to_def_id(), impl_args)) {
1759        return;
1760    }
1761
1762    let typing_env = ty::TypingEnv::fully_monomorphized();
1763    let trait_ref = tcx.normalize_erasing_regions(typing_env, trait_ref);
1764    let overridden_methods = tcx.impl_item_implementor_ids(item.owner_id);
1765    for method in tcx.provided_trait_methods(trait_ref.def_id) {
1766        if overridden_methods.contains_key(&method.def_id) {
1767            continue;
1768        }
1769
1770        if tcx.generics_of(method.def_id).own_requires_monomorphization() {
1771            continue;
1772        }
1773
1774        // As mentioned above, the method is legal to eagerly instantiate if it
1775        // only has lifetime generic parameters. This is validated by calling
1776        // `own_requires_monomorphization` on both the impl and method.
1777        let args = trait_ref.args.extend_to(tcx, method.def_id, only_region_params);
1778        let instance = ty::Instance::expect_resolve(tcx, typing_env, method.def_id, args, DUMMY_SP);
1779
1780        let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
1781        if mono_item.node.is_instantiable(tcx) && tcx.should_codegen_locally(instance) {
1782            output.push(mono_item);
1783        }
1784    }
1785}
1786
1787//=-----------------------------------------------------------------------------
1788// Top-level entry point, tying it all together
1789//=-----------------------------------------------------------------------------
1790
1791#[instrument(skip(tcx, strategy), level = "debug")]
1792pub(crate) fn collect_crate_mono_items<'tcx>(
1793    tcx: TyCtxt<'tcx>,
1794    strategy: MonoItemCollectionStrategy,
1795) -> (Vec<MonoItem<'tcx>>, UsageMap<'tcx>) {
1796    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");
1797
1798    let roots = tcx
1799        .sess
1800        .time("monomorphization_collector_root_collections", || collect_roots(tcx, strategy));
1801
1802    debug!("building mono item graph, beginning at roots");
1803
1804    let state = SharedState {
1805        visited: MTLock::new(UnordSet::default()),
1806        mentioned: MTLock::new(UnordSet::default()),
1807        usage_map: MTLock::new(UsageMap::new()),
1808    };
1809    let recursion_limit = tcx.recursion_limit();
1810
1811    tcx.sess.time("monomorphization_collector_graph_walk", || {
1812        par_for_each_in(roots, |root| {
1813            collect_items_root(tcx, dummy_spanned(*root), &state, recursion_limit);
1814        });
1815    });
1816
1817    // The set of MonoItems was created in an inherently indeterministic order because
1818    // of parallelism. We sort it here to ensure that the output is deterministic.
1819    let mono_items = tcx.with_stable_hashing_context(move |ref hcx| {
1820        state.visited.into_inner().into_sorted(hcx, true)
1821    });
1822
1823    (mono_items, state.usage_map.into_inner())
1824}
1825
1826pub(crate) fn provide(providers: &mut Providers) {
1827    providers.hooks.should_codegen_locally = should_codegen_locally;
1828    providers.items_of_instance = items_of_instance;
1829}