rustc_monomorphize/
collector.rs

1//! Mono Item Collection
2//! ====================
3//!
4//! This module is responsible for discovering all items that will contribute
5//! to code generation of the crate. The important part here is that it not only
6//! needs to find syntax-level items (functions, structs, etc) but also all
7//! their monomorphized instantiations. Every non-generic, non-const function
8//! maps to one LLVM artifact. Every generic function can produce
9//! from zero to N artifacts, depending on the sets of type arguments it
10//! is instantiated with.
11//! This also applies to generic items from other crates: A generic definition
12//! in crate X might produce monomorphizations that are compiled into crate Y.
13//! We also have to collect these here.
14//!
15//! The following kinds of "mono items" are handled here:
16//!
17//! - Functions
18//! - Methods
19//! - Closures
20//! - Statics
21//! - Drop glue
22//!
23//! The following things also result in LLVM artifacts, but are not collected
24//! here, since we instantiate them locally on demand when needed in a given
25//! codegen unit:
26//!
27//! - Constants
28//! - VTables
29//! - Object Shims
30//!
31//! The main entry point is `collect_crate_mono_items`, at the bottom of this file.
32//!
33//! General Algorithm
34//! -----------------
35//! Let's define some terms first:
36//!
37//! - A "mono item" is something that results in a function or global in
38//!   the LLVM IR of a codegen unit. Mono items do not stand on their
39//!   own, they can use other mono items. For example, if function
40//!   `foo()` calls function `bar()` then the mono item for `foo()`
41//!   uses the mono item for function `bar()`. In general, the
42//!   definition for mono item A using a mono item B is that
43//!   the LLVM artifact produced for A uses the LLVM artifact produced
44//!   for B.
45//!
46//! - Mono items and the uses between them form a directed graph,
47//!   where the mono items are the nodes and uses form the edges.
48//!   Let's call this graph the "mono item graph".
49//!
50//! - The mono item graph for a program contains all mono items
51//!   that are needed in order to produce the complete LLVM IR of the program.
52//!
53//! The purpose of the algorithm implemented in this module is to build the
54//! mono item graph for the current crate. It runs in two phases:
55//!
56//! 1. Discover the roots of the graph by traversing the HIR of the crate.
57//! 2. Starting from the roots, find uses by inspecting the MIR
58//!    representation of the item corresponding to a given node, until no more
59//!    new nodes are found.
60//!
61//! ### Discovering roots
62//! The roots of the mono item graph correspond to the public non-generic
63//! syntactic items in the source code. We find them by walking the HIR of the
64//! crate, and whenever we hit upon a public function, method, or static item,
65//! we create a mono item consisting of the items DefId and, since we only
66//! consider non-generic items, an empty type-parameters set. (In eager
67//! collection mode, during incremental compilation, all non-generic functions
68//! are considered as roots, as well as when the `-Clink-dead-code` option is
69//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
70//! functions also always act as roots.)
71//!
72//! ### Finding uses
73//! Given a mono item node, we can discover uses by inspecting its MIR. We walk
74//! the MIR to find other mono items used by each mono item. Since the mono
75//! item we are currently at is always monomorphic, we also know the concrete
76//! type arguments of its used mono items. The specific forms a use can take in
77//! MIR are quite diverse. Here is an overview:
78//!
79//! #### Calling Functions/Methods
80//! The most obvious way for one mono item to use another is a
81//! function or method call (represented by a CALL terminator in MIR). But
82//! calls are not the only thing that might introduce a use between two
83//! function mono items, and as we will see below, they are just a
84//! specialization of the form described next, and consequently will not get any
85//! special treatment in the algorithm.
86//!
87//! #### Taking a reference to a function or method
88//! A function does not need to actually be called in order to be used by
89//! another function. It suffices to just take a reference in order to introduce
90//! an edge. Consider the following example:
91//!
92//! ```
93//! # use core::fmt::Display;
94//! fn print_val<T: Display>(x: T) {
95//!     println!("{}", x);
96//! }
97//!
98//! fn call_fn(f: &dyn Fn(i32), x: i32) {
99//!     f(x);
100//! }
101//!
102//! fn main() {
103//!     let print_i32 = print_val::<i32>;
104//!     call_fn(&print_i32, 0);
105//! }
106//! ```
107//! The MIR of none of these functions will contain an explicit call to
108//! `print_val::<i32>`. Nonetheless, in order to mono this program, we need
109//! an instance of this function. Thus, whenever we encounter a function or
110//! method in operand position, we treat it as a use of the current
111//! mono item. Calls are just a special case of that.
112//!
113//! #### Drop glue
114//! Drop glue mono items are introduced by MIR drop-statements. The
115//! generated mono item will have additional drop-glue item uses if the
116//! type to be dropped contains nested values that also need to be dropped. It
117//! might also have a function item use for the explicit `Drop::drop`
118//! implementation of its type.
119//!
120//! #### Unsizing Casts
121//! A subtle way of introducing use edges is by casting to a trait object.
122//! Since the resulting wide-pointer contains a reference to a vtable, we need to
123//! instantiate all dyn-compatible methods of the trait, as we need to store
124//! pointers to these functions even if they never get called anywhere. This can
125//! be seen as a special case of taking a function reference.
126//!
127//!
128//! Interaction with Cross-Crate Inlining
129//! -------------------------------------
130//! The binary of a crate will not only contain machine code for the items
131//! defined in the source code of that crate. It will also contain monomorphic
132//! instantiations of any extern generic functions and of functions marked with
133//! `#[inline]`.
134//! The collection algorithm handles this more or less mono. If it is
135//! about to create a mono item for something with an external `DefId`,
136//! it will take a look if the MIR for that item is available, and if so just
137//! proceed normally. If the MIR is not available, it assumes that the item is
138//! just linked to and no node is created; which is exactly what we want, since
139//! no machine code should be generated in the current crate for such an item.
140//!
141//! Eager and Lazy Collection Strategy
142//! ----------------------------------
143//! Mono item collection can be performed with one of two strategies:
144//!
145//! - Lazy strategy means that items will only be instantiated when actually
146//!   used. The goal is to produce the least amount of machine code
147//!   possible.
148//!
149//! - Eager strategy is meant to be used in conjunction with incremental compilation
150//!   where a stable set of mono items is more important than a minimal
151//!   one. Thus, eager strategy will instantiate drop-glue for every drop-able type
152//!   in the crate, even if no drop call for that type exists (yet). It will
153//!   also instantiate default implementations of trait methods, something that
154//!   otherwise is only done on demand.
155//!
156//! Collection-time const evaluation and "mentioned" items
157//! ------------------------------------------------------
158//!
159//! One important role of collection is to evaluate all constants that are used by all the items
160//! which are being collected. Codegen can then rely on only encountering constants that evaluate
161//! successfully, and if a constant fails to evaluate, the collector has much better context to be
162//! able to show where this constant comes up.
163//!
164//! However, the exact set of "used" items (collected as described above), and therefore the exact
165//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
166//! a function call that uses a failing constant, so an unoptimized build may fail where an
167//! optimized build succeeds. This is undesirable.
168//!
169//! To avoid this, the collector has the concept of "mentioned" items. Some time during the MIR
170//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
171//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
172//! dead code and get optimized away (which makes them no longer "used"), they are still
173//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
174//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
175//! whether we are visiting a used item or merely a mentioned item.
176//!
177//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
178//! need to stay in sync in the following sense:
179//!
180//! - For every item that the collector gather that could eventually lead to build failure (most
181//!   likely due to containing a constant that fails to evaluate), a corresponding mentioned item
182//!   must be added. This should use the exact same strategy as the ecollector to make sure they are
183//!   in sync. However, while the collector works on monomorphized types, mentioned items are
184//!   collected on generic MIR -- so any time the collector checks for a particular type (such as
185//!   `ty::FnDef`), we have to just onconditionally add this as a mentioned item.
186//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
187//!   would have done during regular MIR visiting. Basically you can think of the collector having
188//!   two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
189//!   literally separated by a call to `self.monomorphize`); the pre-monomorphizationn stage is
190//!   duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
191//!   `visit_mentioned_item`.
192//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
193//!   its MIR traversal with exactly what mentioned item gathering would have added in the same
194//!   situation. This detects mentioned items that have *not* been optimized away and hence don't
195//!   need a dedicated traversal.
196//!
197//! Open Issues
198//! -----------
199//! Some things are not yet fully implemented in the current version of this
200//! module.
201//!
202//! ### Const Fns
203//! Ideally, no mono item should be generated for const fns unless there
204//! is a call to them that cannot be evaluated at compile time. At the moment
205//! this is not implemented however: a mono item will be produced
206//! regardless of whether it is actually needed or not.
207
208use std::path::PathBuf;
209
210use rustc_attr_parsing::InlineAttr;
211use rustc_data_structures::fx::FxIndexMap;
212use rustc_data_structures::sync::{LRef, MTLock, par_for_each_in};
213use rustc_data_structures::unord::{UnordMap, UnordSet};
214use rustc_hir as hir;
215use rustc_hir::def::DefKind;
216use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
217use rustc_hir::lang_items::LangItem;
218use rustc_middle::middle::codegen_fn_attrs::CodegenFnAttrFlags;
219use rustc_middle::mir::interpret::{AllocId, ErrorHandled, GlobalAlloc, Scalar};
220use rustc_middle::mir::mono::{CollectionMode, InstantiationMode, MonoItem};
221use rustc_middle::mir::visit::Visitor as MirVisitor;
222use rustc_middle::mir::{self, Location, MentionedItem, traversal};
223use rustc_middle::query::TyCtxtAt;
224use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCoercion};
225use rustc_middle::ty::layout::ValidityRequirement;
226use rustc_middle::ty::print::{shrunk_instance_name, with_no_trimmed_paths};
227use rustc_middle::ty::{
228    self, GenericArgs, GenericParamDefKind, Instance, InstanceKind, Interner, Ty, TyCtxt,
229    TypeFoldable, TypeVisitableExt, VtblEntry,
230};
231use rustc_middle::util::Providers;
232use rustc_middle::{bug, span_bug};
233use rustc_session::Limit;
234use rustc_session::config::EntryFnType;
235use rustc_span::source_map::{Spanned, dummy_spanned, respan};
236use rustc_span::{DUMMY_SP, Span};
237use tracing::{debug, instrument, trace};
238
239use crate::errors::{self, EncounteredErrorWhileInstantiating, NoOptimizedMir, RecursionLimit};
240
241#[derive(PartialEq)]
242pub(crate) enum MonoItemCollectionStrategy {
243    Eager,
244    Lazy,
245}
246
247/// The state that is shared across the concurrent threads that are doing collection.
248struct SharedState<'tcx> {
249    /// Items that have been or are currently being recursively collected.
250    visited: MTLock<UnordSet<MonoItem<'tcx>>>,
251    /// Items that have been or are currently being recursively treated as "mentioned", i.e., their
252    /// consts are evaluated but nothing is added to the collection.
253    mentioned: MTLock<UnordSet<MonoItem<'tcx>>>,
254    /// Which items are being used where, for better errors.
255    usage_map: MTLock<UsageMap<'tcx>>,
256}
257
258pub(crate) struct UsageMap<'tcx> {
259    // Maps every mono item to the mono items used by it.
260    pub used_map: UnordMap<MonoItem<'tcx>, Vec<MonoItem<'tcx>>>,
261
262    // Maps every mono item to the mono items that use it.
263    user_map: UnordMap<MonoItem<'tcx>, Vec<MonoItem<'tcx>>>,
264}
265
266impl<'tcx> UsageMap<'tcx> {
267    fn new() -> UsageMap<'tcx> {
268        UsageMap { used_map: Default::default(), user_map: Default::default() }
269    }
270
271    fn record_used<'a>(&mut self, user_item: MonoItem<'tcx>, used_items: &'a MonoItems<'tcx>)
272    where
273        'tcx: 'a,
274    {
275        for used_item in used_items.items() {
276            self.user_map.entry(used_item).or_default().push(user_item);
277        }
278
279        assert!(self.used_map.insert(user_item, used_items.items().collect()).is_none());
280    }
281
282    pub(crate) fn get_user_items(&self, item: MonoItem<'tcx>) -> &[MonoItem<'tcx>] {
283        self.user_map.get(&item).map(|items| items.as_slice()).unwrap_or(&[])
284    }
285
286    /// Internally iterate over all inlined items used by `item`.
287    pub(crate) fn for_each_inlined_used_item<F>(
288        &self,
289        tcx: TyCtxt<'tcx>,
290        item: MonoItem<'tcx>,
291        mut f: F,
292    ) where
293        F: FnMut(MonoItem<'tcx>),
294    {
295        let used_items = self.used_map.get(&item).unwrap();
296        for used_item in used_items.iter() {
297            let is_inlined = used_item.instantiation_mode(tcx) == InstantiationMode::LocalCopy;
298            if is_inlined {
299                f(*used_item);
300            }
301        }
302    }
303}
304
305struct MonoItems<'tcx> {
306    // We want a set of MonoItem + Span where trying to re-insert a MonoItem with a different Span
307    // is ignored. Map does that, but it looks odd.
308    items: FxIndexMap<MonoItem<'tcx>, Span>,
309}
310
311impl<'tcx> MonoItems<'tcx> {
312    fn new() -> Self {
313        Self { items: FxIndexMap::default() }
314    }
315
316    fn is_empty(&self) -> bool {
317        self.items.is_empty()
318    }
319
320    fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
321        // Insert only if the entry does not exist. A normal insert would stomp the first span that
322        // got inserted.
323        self.items.entry(item.node).or_insert(item.span);
324    }
325
326    fn items(&self) -> impl Iterator<Item = MonoItem<'tcx>> + '_ {
327        self.items.keys().cloned()
328    }
329}
330
331impl<'tcx> IntoIterator for MonoItems<'tcx> {
332    type Item = Spanned<MonoItem<'tcx>>;
333    type IntoIter = impl Iterator<Item = Spanned<MonoItem<'tcx>>>;
334
335    fn into_iter(self) -> Self::IntoIter {
336        self.items.into_iter().map(|(item, span)| respan(span, item))
337    }
338}
339
340impl<'tcx> Extend<Spanned<MonoItem<'tcx>>> for MonoItems<'tcx> {
341    fn extend<I>(&mut self, iter: I)
342    where
343        I: IntoIterator<Item = Spanned<MonoItem<'tcx>>>,
344    {
345        for item in iter {
346            self.push(item)
347        }
348    }
349}
350
351/// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
352/// post-monomorphization error is encountered during a collection step.
353///
354/// `mode` determined whether we are scanning for [used items][CollectionMode::UsedItems]
355/// or [mentioned items][CollectionMode::MentionedItems].
356#[instrument(skip(tcx, state, recursion_depths, recursion_limit), level = "debug")]
357fn collect_items_rec<'tcx>(
358    tcx: TyCtxt<'tcx>,
359    starting_item: Spanned<MonoItem<'tcx>>,
360    state: LRef<'_, SharedState<'tcx>>,
361    recursion_depths: &mut DefIdMap<usize>,
362    recursion_limit: Limit,
363    mode: CollectionMode,
364) {
365    if mode == CollectionMode::UsedItems {
366        if !state.visited.lock_mut().insert(starting_item.node) {
367            // We've been here already, no need to search again.
368            return;
369        }
370    } else {
371        if state.visited.lock().contains(&starting_item.node) {
372            // We've already done a *full* visit on this one, no need to do the "mention" visit.
373            return;
374        }
375        if !state.mentioned.lock_mut().insert(starting_item.node) {
376            // We've been here already, no need to search again.
377            return;
378        }
379        // There's some risk that we first do a 'mention' visit and then a full visit. But there's no
380        // harm in that, the mention visit will trigger all the queries and the results are cached.
381    }
382
383    let mut used_items = MonoItems::new();
384    let mut mentioned_items = MonoItems::new();
385    let recursion_depth_reset;
386
387    // Post-monomorphization errors MVP
388    //
389    // We can encounter errors while monomorphizing an item, but we don't have a good way of
390    // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
391    // (It's also currently unclear exactly which diagnostics and information would be interesting
392    // to report in such cases)
393    //
394    // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
395    // shown with just a spanned piece of code causing the error, without information on where
396    // it was called from. This is especially obscure if the erroneous mono item is in a
397    // dependency. See for example issue #85155, where, before minimization, a PME happened two
398    // crates downstream from libcore's stdarch, without a way to know which dependency was the
399    // cause.
400    //
401    // If such an error occurs in the current crate, its span will be enough to locate the
402    // source. If the cause is in another crate, the goal here is to quickly locate which mono
403    // item in the current crate is ultimately responsible for causing the error.
404    //
405    // To give at least _some_ context to the user: while collecting mono items, we check the
406    // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
407    // current step of mono items collection.
408    //
409    // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
410    let error_count = tcx.dcx().err_count();
411
412    // In `mentioned_items` we collect items that were mentioned in this MIR but possibly do not
413    // need to be monomorphized. This is done to ensure that optimizing away function calls does not
414    // hide const-eval errors that those calls would otherwise have triggered.
415    match starting_item.node {
416        MonoItem::Static(def_id) => {
417            recursion_depth_reset = None;
418
419            // Statics always get evaluated (which is possible because they can't be generic), so for
420            // `MentionedItems` collection there's nothing to do here.
421            if mode == CollectionMode::UsedItems {
422                let instance = Instance::mono(tcx, def_id);
423
424                // Sanity check whether this ended up being collected accidentally
425                debug_assert!(tcx.should_codegen_locally(instance));
426
427                let DefKind::Static { nested, .. } = tcx.def_kind(def_id) else { bug!() };
428                // Nested statics have no type.
429                if !nested {
430                    let ty = instance.ty(tcx, ty::TypingEnv::fully_monomorphized());
431                    visit_drop_use(tcx, ty, true, starting_item.span, &mut used_items);
432                }
433
434                if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
435                    for &prov in alloc.inner().provenance().ptrs().values() {
436                        collect_alloc(tcx, prov.alloc_id(), &mut used_items);
437                    }
438                }
439
440                if tcx.needs_thread_local_shim(def_id) {
441                    used_items.push(respan(
442                        starting_item.span,
443                        MonoItem::Fn(Instance {
444                            def: InstanceKind::ThreadLocalShim(def_id),
445                            args: GenericArgs::empty(),
446                        }),
447                    ));
448                }
449            }
450
451            // mentioned_items stays empty since there's no codegen for statics. statics don't get
452            // optimized, and if they did then the const-eval interpreter would have to worry about
453            // mentioned_items.
454        }
455        MonoItem::Fn(instance) => {
456            // Sanity check whether this ended up being collected accidentally
457            debug_assert!(tcx.should_codegen_locally(instance));
458
459            // Keep track of the monomorphization recursion depth
460            recursion_depth_reset = Some(check_recursion_limit(
461                tcx,
462                instance,
463                starting_item.span,
464                recursion_depths,
465                recursion_limit,
466            ));
467
468            rustc_data_structures::stack::ensure_sufficient_stack(|| {
469                let (used, mentioned) = tcx.items_of_instance((instance, mode));
470                used_items.extend(used.into_iter().copied());
471                mentioned_items.extend(mentioned.into_iter().copied());
472            });
473        }
474        MonoItem::GlobalAsm(item_id) => {
475            assert!(
476                mode == CollectionMode::UsedItems,
477                "should never encounter global_asm when collecting mentioned items"
478            );
479            recursion_depth_reset = None;
480
481            let item = tcx.hir().item(item_id);
482            if let hir::ItemKind::GlobalAsm(asm) = item.kind {
483                for (op, op_sp) in asm.operands {
484                    match op {
485                        hir::InlineAsmOperand::Const { .. } => {
486                            // Only constants which resolve to a plain integer
487                            // are supported. Therefore the value should not
488                            // depend on any other items.
489                        }
490                        hir::InlineAsmOperand::SymFn { anon_const } => {
491                            let fn_ty =
492                                tcx.typeck_body(anon_const.body).node_type(anon_const.hir_id);
493                            visit_fn_use(tcx, fn_ty, false, *op_sp, &mut used_items);
494                        }
495                        hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
496                            let instance = Instance::mono(tcx, *def_id);
497                            if tcx.should_codegen_locally(instance) {
498                                trace!("collecting static {:?}", def_id);
499                                used_items.push(dummy_spanned(MonoItem::Static(*def_id)));
500                            }
501                        }
502                        hir::InlineAsmOperand::In { .. }
503                        | hir::InlineAsmOperand::Out { .. }
504                        | hir::InlineAsmOperand::InOut { .. }
505                        | hir::InlineAsmOperand::SplitInOut { .. }
506                        | hir::InlineAsmOperand::Label { .. } => {
507                            span_bug!(*op_sp, "invalid operand type for global_asm!")
508                        }
509                    }
510                }
511            } else {
512                span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
513            }
514
515            // mention_items stays empty as nothing gets optimized here.
516        }
517    };
518
519    // Check for PMEs and emit a diagnostic if one happened. To try to show relevant edges of the
520    // mono item graph.
521    if tcx.dcx().err_count() > error_count
522        && starting_item.node.is_generic_fn()
523        && starting_item.node.is_user_defined()
524    {
525        let formatted_item = with_no_trimmed_paths!(starting_item.node.to_string());
526        tcx.dcx().emit_note(EncounteredErrorWhileInstantiating {
527            span: starting_item.span,
528            formatted_item,
529        });
530    }
531    // Only updating `usage_map` for used items as otherwise we may be inserting the same item
532    // multiple times (if it is first 'mentioned' and then later actuall used), and the usage map
533    // logic does not like that.
534    // This is part of the output of collection and hence only relevant for "used" items.
535    // ("Mentioned" items are only considered internally during collection.)
536    if mode == CollectionMode::UsedItems {
537        state.usage_map.lock_mut().record_used(starting_item.node, &used_items);
538    }
539
540    if mode == CollectionMode::MentionedItems {
541        assert!(used_items.is_empty(), "'mentioned' collection should never encounter used items");
542    } else {
543        for used_item in used_items {
544            collect_items_rec(
545                tcx,
546                used_item,
547                state,
548                recursion_depths,
549                recursion_limit,
550                CollectionMode::UsedItems,
551            );
552        }
553    }
554
555    // Walk over mentioned items *after* used items, so that if an item is both mentioned and used then
556    // the loop above has fully collected it, so this loop will skip it.
557    for mentioned_item in mentioned_items {
558        collect_items_rec(
559            tcx,
560            mentioned_item,
561            state,
562            recursion_depths,
563            recursion_limit,
564            CollectionMode::MentionedItems,
565        );
566    }
567
568    if let Some((def_id, depth)) = recursion_depth_reset {
569        recursion_depths.insert(def_id, depth);
570    }
571}
572
573fn check_recursion_limit<'tcx>(
574    tcx: TyCtxt<'tcx>,
575    instance: Instance<'tcx>,
576    span: Span,
577    recursion_depths: &mut DefIdMap<usize>,
578    recursion_limit: Limit,
579) -> (DefId, usize) {
580    let def_id = instance.def_id();
581    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
582    debug!(" => recursion depth={}", recursion_depth);
583
584    let adjusted_recursion_depth = if tcx.is_lang_item(def_id, LangItem::DropInPlace) {
585        // HACK: drop_in_place creates tight monomorphization loops. Give
586        // it more margin.
587        recursion_depth / 4
588    } else {
589        recursion_depth
590    };
591
592    // Code that needs to instantiate the same function recursively
593    // more than the recursion limit is assumed to be causing an
594    // infinite expansion.
595    if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
596        let def_span = tcx.def_span(def_id);
597        let def_path_str = tcx.def_path_str(def_id);
598        let (shrunk, written_to_path) = shrunk_instance_name(tcx, instance);
599        let mut path = PathBuf::new();
600        let was_written = if let Some(written_to_path) = written_to_path {
601            path = written_to_path;
602            true
603        } else {
604            false
605        };
606        tcx.dcx().emit_fatal(RecursionLimit {
607            span,
608            shrunk,
609            def_span,
610            def_path_str,
611            was_written,
612            path,
613        });
614    }
615
616    recursion_depths.insert(def_id, recursion_depth + 1);
617
618    (def_id, recursion_depth)
619}
620
621struct MirUsedCollector<'a, 'tcx> {
622    tcx: TyCtxt<'tcx>,
623    body: &'a mir::Body<'tcx>,
624    used_items: &'a mut MonoItems<'tcx>,
625    /// See the comment in `collect_items_of_instance` for the purpose of this set.
626    /// Note that this contains *not-monomorphized* items!
627    used_mentioned_items: &'a mut UnordSet<MentionedItem<'tcx>>,
628    instance: Instance<'tcx>,
629}
630
631impl<'a, 'tcx> MirUsedCollector<'a, 'tcx> {
632    fn monomorphize<T>(&self, value: T) -> T
633    where
634        T: TypeFoldable<TyCtxt<'tcx>>,
635    {
636        trace!("monomorphize: self.instance={:?}", self.instance);
637        self.instance.instantiate_mir_and_normalize_erasing_regions(
638            self.tcx,
639            ty::TypingEnv::fully_monomorphized(),
640            ty::EarlyBinder::bind(value),
641        )
642    }
643
644    /// Evaluates a *not yet monomorphized* constant.
645    fn eval_constant(
646        &mut self,
647        constant: &mir::ConstOperand<'tcx>,
648    ) -> Option<mir::ConstValue<'tcx>> {
649        let const_ = self.monomorphize(constant.const_);
650        // Evaluate the constant. This makes const eval failure a collection-time error (rather than
651        // a codegen-time error). rustc stops after collection if there was an error, so this
652        // ensures codegen never has to worry about failing consts.
653        // (codegen relies on this and ICEs will happen if this is violated.)
654        match const_.eval(self.tcx, ty::TypingEnv::fully_monomorphized(), constant.span) {
655            Ok(v) => Some(v),
656            Err(ErrorHandled::TooGeneric(..)) => span_bug!(
657                constant.span,
658                "collection encountered polymorphic constant: {:?}",
659                const_
660            ),
661            Err(err @ ErrorHandled::Reported(..)) => {
662                err.emit_note(self.tcx);
663                return None;
664            }
665        }
666    }
667}
668
669impl<'a, 'tcx> MirVisitor<'tcx> for MirUsedCollector<'a, 'tcx> {
670    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
671        debug!("visiting rvalue {:?}", *rvalue);
672
673        let span = self.body.source_info(location).span;
674
675        match *rvalue {
676            // When doing an cast from a regular pointer to a wide pointer, we
677            // have to instantiate all methods of the trait being cast to, so we
678            // can build the appropriate vtable.
679            mir::Rvalue::Cast(
680                mir::CastKind::PointerCoercion(PointerCoercion::Unsize, _)
681                | mir::CastKind::PointerCoercion(PointerCoercion::DynStar, _),
682                ref operand,
683                target_ty,
684            ) => {
685                let source_ty = operand.ty(self.body, self.tcx);
686                // *Before* monomorphizing, record that we already handled this mention.
687                self.used_mentioned_items
688                    .insert(MentionedItem::UnsizeCast { source_ty, target_ty });
689                let target_ty = self.monomorphize(target_ty);
690                let source_ty = self.monomorphize(source_ty);
691                let (source_ty, target_ty) =
692                    find_vtable_types_for_unsizing(self.tcx.at(span), source_ty, target_ty);
693                // This could also be a different Unsize instruction, like
694                // from a fixed sized array to a slice. But we are only
695                // interested in things that produce a vtable.
696                if (target_ty.is_trait() && !source_ty.is_trait())
697                    || (target_ty.is_dyn_star() && !source_ty.is_dyn_star())
698                {
699                    create_mono_items_for_vtable_methods(
700                        self.tcx,
701                        target_ty,
702                        source_ty,
703                        span,
704                        self.used_items,
705                    );
706                }
707            }
708            mir::Rvalue::Cast(
709                mir::CastKind::PointerCoercion(PointerCoercion::ReifyFnPointer, _),
710                ref operand,
711                _,
712            ) => {
713                let fn_ty = operand.ty(self.body, self.tcx);
714                // *Before* monomorphizing, record that we already handled this mention.
715                self.used_mentioned_items.insert(MentionedItem::Fn(fn_ty));
716                let fn_ty = self.monomorphize(fn_ty);
717                visit_fn_use(self.tcx, fn_ty, false, span, self.used_items);
718            }
719            mir::Rvalue::Cast(
720                mir::CastKind::PointerCoercion(PointerCoercion::ClosureFnPointer(_), _),
721                ref operand,
722                _,
723            ) => {
724                let source_ty = operand.ty(self.body, self.tcx);
725                // *Before* monomorphizing, record that we already handled this mention.
726                self.used_mentioned_items.insert(MentionedItem::Closure(source_ty));
727                let source_ty = self.monomorphize(source_ty);
728                if let ty::Closure(def_id, args) = *source_ty.kind() {
729                    let instance =
730                        Instance::resolve_closure(self.tcx, def_id, args, ty::ClosureKind::FnOnce);
731                    if self.tcx.should_codegen_locally(instance) {
732                        self.used_items.push(create_fn_mono_item(self.tcx, instance, span));
733                    }
734                } else {
735                    bug!()
736                }
737            }
738            mir::Rvalue::ThreadLocalRef(def_id) => {
739                assert!(self.tcx.is_thread_local_static(def_id));
740                let instance = Instance::mono(self.tcx, def_id);
741                if self.tcx.should_codegen_locally(instance) {
742                    trace!("collecting thread-local static {:?}", def_id);
743                    self.used_items.push(respan(span, MonoItem::Static(def_id)));
744                }
745            }
746            _ => { /* not interesting */ }
747        }
748
749        self.super_rvalue(rvalue, location);
750    }
751
752    /// This does not walk the MIR of the constant as that is not needed for codegen, all we need is
753    /// to ensure that the constant evaluates successfully and walk the result.
754    #[instrument(skip(self), level = "debug")]
755    fn visit_const_operand(&mut self, constant: &mir::ConstOperand<'tcx>, location: Location) {
756        // No `super_constant` as we don't care about `visit_ty`/`visit_ty_const`.
757        let Some(val) = self.eval_constant(constant) else { return };
758        collect_const_value(self.tcx, val, self.used_items);
759    }
760
761    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
762        debug!("visiting terminator {:?} @ {:?}", terminator, location);
763        let source = self.body.source_info(location).span;
764
765        let tcx = self.tcx;
766        let push_mono_lang_item = |this: &mut Self, lang_item: LangItem| {
767            let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, Some(source)));
768            if tcx.should_codegen_locally(instance) {
769                this.used_items.push(create_fn_mono_item(tcx, instance, source));
770            }
771        };
772
773        match terminator.kind {
774            mir::TerminatorKind::Call { ref func, .. }
775            | mir::TerminatorKind::TailCall { ref func, .. } => {
776                let callee_ty = func.ty(self.body, tcx);
777                // *Before* monomorphizing, record that we already handled this mention.
778                self.used_mentioned_items.insert(MentionedItem::Fn(callee_ty));
779                let callee_ty = self.monomorphize(callee_ty);
780                visit_fn_use(self.tcx, callee_ty, true, source, &mut self.used_items)
781            }
782            mir::TerminatorKind::Drop { ref place, .. } => {
783                let ty = place.ty(self.body, self.tcx).ty;
784                // *Before* monomorphizing, record that we already handled this mention.
785                self.used_mentioned_items.insert(MentionedItem::Drop(ty));
786                let ty = self.monomorphize(ty);
787                visit_drop_use(self.tcx, ty, true, source, self.used_items);
788            }
789            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
790                for op in operands {
791                    match *op {
792                        mir::InlineAsmOperand::SymFn { ref value } => {
793                            let fn_ty = value.const_.ty();
794                            // *Before* monomorphizing, record that we already handled this mention.
795                            self.used_mentioned_items.insert(MentionedItem::Fn(fn_ty));
796                            let fn_ty = self.monomorphize(fn_ty);
797                            visit_fn_use(self.tcx, fn_ty, false, source, self.used_items);
798                        }
799                        mir::InlineAsmOperand::SymStatic { def_id } => {
800                            let instance = Instance::mono(self.tcx, def_id);
801                            if self.tcx.should_codegen_locally(instance) {
802                                trace!("collecting asm sym static {:?}", def_id);
803                                self.used_items.push(respan(source, MonoItem::Static(def_id)));
804                            }
805                        }
806                        _ => {}
807                    }
808                }
809            }
810            mir::TerminatorKind::Assert { ref msg, .. } => match &**msg {
811                mir::AssertKind::BoundsCheck { .. } => {
812                    push_mono_lang_item(self, LangItem::PanicBoundsCheck);
813                }
814                mir::AssertKind::MisalignedPointerDereference { .. } => {
815                    push_mono_lang_item(self, LangItem::PanicMisalignedPointerDereference);
816                }
817                mir::AssertKind::NullPointerDereference => {
818                    push_mono_lang_item(self, LangItem::PanicNullPointerDereference);
819                }
820                _ => {
821                    push_mono_lang_item(self, msg.panic_function());
822                }
823            },
824            mir::TerminatorKind::UnwindTerminate(reason) => {
825                push_mono_lang_item(self, reason.lang_item());
826            }
827            mir::TerminatorKind::Goto { .. }
828            | mir::TerminatorKind::SwitchInt { .. }
829            | mir::TerminatorKind::UnwindResume
830            | mir::TerminatorKind::Return
831            | mir::TerminatorKind::Unreachable => {}
832            mir::TerminatorKind::CoroutineDrop
833            | mir::TerminatorKind::Yield { .. }
834            | mir::TerminatorKind::FalseEdge { .. }
835            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
836        }
837
838        if let Some(mir::UnwindAction::Terminate(reason)) = terminator.unwind() {
839            push_mono_lang_item(self, reason.lang_item());
840        }
841
842        self.super_terminator(terminator, location);
843    }
844}
845
846fn visit_drop_use<'tcx>(
847    tcx: TyCtxt<'tcx>,
848    ty: Ty<'tcx>,
849    is_direct_call: bool,
850    source: Span,
851    output: &mut MonoItems<'tcx>,
852) {
853    let instance = Instance::resolve_drop_in_place(tcx, ty);
854    visit_instance_use(tcx, instance, is_direct_call, source, output);
855}
856
857/// For every call of this function in the visitor, make sure there is a matching call in the
858/// `mentioned_items` pass!
859fn visit_fn_use<'tcx>(
860    tcx: TyCtxt<'tcx>,
861    ty: Ty<'tcx>,
862    is_direct_call: bool,
863    source: Span,
864    output: &mut MonoItems<'tcx>,
865) {
866    if let ty::FnDef(def_id, args) = *ty.kind() {
867        let instance = if is_direct_call {
868            ty::Instance::expect_resolve(
869                tcx,
870                ty::TypingEnv::fully_monomorphized(),
871                def_id,
872                args,
873                source,
874            )
875        } else {
876            match ty::Instance::resolve_for_fn_ptr(
877                tcx,
878                ty::TypingEnv::fully_monomorphized(),
879                def_id,
880                args,
881            ) {
882                Some(instance) => instance,
883                _ => bug!("failed to resolve instance for {ty}"),
884            }
885        };
886        visit_instance_use(tcx, instance, is_direct_call, source, output);
887    }
888}
889
890fn visit_instance_use<'tcx>(
891    tcx: TyCtxt<'tcx>,
892    instance: ty::Instance<'tcx>,
893    is_direct_call: bool,
894    source: Span,
895    output: &mut MonoItems<'tcx>,
896) {
897    debug!("visit_item_use({:?}, is_direct_call={:?})", instance, is_direct_call);
898    if !tcx.should_codegen_locally(instance) {
899        return;
900    }
901    if let Some(intrinsic) = tcx.intrinsic(instance.def_id()) {
902        if let Some(_requirement) = ValidityRequirement::from_intrinsic(intrinsic.name) {
903            // The intrinsics assert_inhabited, assert_zero_valid, and assert_mem_uninitialized_valid will
904            // be lowered in codegen to nothing or a call to panic_nounwind. So if we encounter any
905            // of those intrinsics, we need to include a mono item for panic_nounwind, else we may try to
906            // codegen a call to that function without generating code for the function itself.
907            let def_id = tcx.require_lang_item(LangItem::PanicNounwind, None);
908            let panic_instance = Instance::mono(tcx, def_id);
909            if tcx.should_codegen_locally(panic_instance) {
910                output.push(create_fn_mono_item(tcx, panic_instance, source));
911            }
912        } else if !intrinsic.must_be_overridden {
913            // Codegen the fallback body of intrinsics with fallback bodies.
914            // We explicitly skip this otherwise to ensure we get a linker error
915            // if anyone tries to call this intrinsic and the codegen backend did not
916            // override the implementation.
917            let instance = ty::Instance::new(instance.def_id(), instance.args);
918            if tcx.should_codegen_locally(instance) {
919                output.push(create_fn_mono_item(tcx, instance, source));
920            }
921        }
922    }
923
924    match instance.def {
925        ty::InstanceKind::Virtual(..) | ty::InstanceKind::Intrinsic(_) => {
926            if !is_direct_call {
927                bug!("{:?} being reified", instance);
928            }
929        }
930        ty::InstanceKind::ThreadLocalShim(..) => {
931            bug!("{:?} being reified", instance);
932        }
933        ty::InstanceKind::DropGlue(_, None) | ty::InstanceKind::AsyncDropGlueCtorShim(_, None) => {
934            // Don't need to emit noop drop glue if we are calling directly.
935            if !is_direct_call {
936                output.push(create_fn_mono_item(tcx, instance, source));
937            }
938        }
939        ty::InstanceKind::DropGlue(_, Some(_))
940        | ty::InstanceKind::AsyncDropGlueCtorShim(_, Some(_))
941        | ty::InstanceKind::VTableShim(..)
942        | ty::InstanceKind::ReifyShim(..)
943        | ty::InstanceKind::ClosureOnceShim { .. }
944        | ty::InstanceKind::ConstructCoroutineInClosureShim { .. }
945        | ty::InstanceKind::Item(..)
946        | ty::InstanceKind::FnPtrShim(..)
947        | ty::InstanceKind::CloneShim(..)
948        | ty::InstanceKind::FnPtrAddrShim(..) => {
949            output.push(create_fn_mono_item(tcx, instance, source));
950        }
951    }
952}
953
954/// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
955/// can just link to the upstream crate and therefore don't need a mono item.
956fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) -> bool {
957    let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
958        return true;
959    };
960
961    if tcx.is_foreign_item(def_id) {
962        // Foreign items are always linked against, there's no way of instantiating them.
963        return false;
964    }
965
966    if tcx.def_kind(def_id).has_codegen_attrs()
967        && matches!(tcx.codegen_fn_attrs(def_id).inline, InlineAttr::Force { .. })
968    {
969        // `#[rustc_force_inline]` items should never be codegened. This should be caught by
970        // the MIR validator.
971        tcx.delay_bug("attempt to codegen `#[rustc_force_inline]` item");
972    }
973
974    if def_id.is_local() {
975        // Local items cannot be referred to locally without monomorphizing them locally.
976        return true;
977    }
978
979    if tcx.is_reachable_non_generic(def_id) || instance.upstream_monomorphization(tcx).is_some() {
980        // We can link to the item in question, no instance needed in this crate.
981        return false;
982    }
983
984    if let DefKind::Static { .. } = tcx.def_kind(def_id) {
985        // We cannot monomorphize statics from upstream crates.
986        return false;
987    }
988
989    if !tcx.is_mir_available(def_id) {
990        tcx.dcx().emit_fatal(NoOptimizedMir {
991            span: tcx.def_span(def_id),
992            crate_name: tcx.crate_name(def_id.krate),
993        });
994    }
995
996    true
997}
998
999/// For a given pair of source and target type that occur in an unsizing coercion,
1000/// this function finds the pair of types that determines the vtable linking
1001/// them.
1002///
1003/// For example, the source type might be `&SomeStruct` and the target type
1004/// might be `&dyn SomeTrait` in a cast like:
1005///
1006/// ```rust,ignore (not real code)
1007/// let src: &SomeStruct = ...;
1008/// let target = src as &dyn SomeTrait;
1009/// ```
1010///
1011/// Then the output of this function would be (SomeStruct, SomeTrait) since for
1012/// constructing the `target` wide-pointer we need the vtable for that pair.
1013///
1014/// Things can get more complicated though because there's also the case where
1015/// the unsized type occurs as a field:
1016///
1017/// ```rust
1018/// struct ComplexStruct<T: ?Sized> {
1019///    a: u32,
1020///    b: f64,
1021///    c: T
1022/// }
1023/// ```
1024///
1025/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
1026/// is unsized, `&SomeStruct` is a wide pointer, and the vtable it points to is
1027/// for the pair of `T` (which is a trait) and the concrete type that `T` was
1028/// originally coerced from:
1029///
1030/// ```rust,ignore (not real code)
1031/// let src: &ComplexStruct<SomeStruct> = ...;
1032/// let target = src as &ComplexStruct<dyn SomeTrait>;
1033/// ```
1034///
1035/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
1036/// `(SomeStruct, SomeTrait)`.
1037///
1038/// Finally, there is also the case of custom unsizing coercions, e.g., for
1039/// smart pointers such as `Rc` and `Arc`.
1040fn find_vtable_types_for_unsizing<'tcx>(
1041    tcx: TyCtxtAt<'tcx>,
1042    source_ty: Ty<'tcx>,
1043    target_ty: Ty<'tcx>,
1044) -> (Ty<'tcx>, Ty<'tcx>) {
1045    let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
1046        let typing_env = ty::TypingEnv::fully_monomorphized();
1047        let type_has_metadata = |ty: Ty<'tcx>| -> bool {
1048            if ty.is_sized(tcx.tcx, typing_env) {
1049                return false;
1050            }
1051            let tail = tcx.struct_tail_for_codegen(ty, typing_env);
1052            match tail.kind() {
1053                ty::Foreign(..) => false,
1054                ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
1055                _ => bug!("unexpected unsized tail: {:?}", tail),
1056            }
1057        };
1058        if type_has_metadata(inner_source) {
1059            (inner_source, inner_target)
1060        } else {
1061            tcx.struct_lockstep_tails_for_codegen(inner_source, inner_target, typing_env)
1062        }
1063    };
1064
1065    match (source_ty.kind(), target_ty.kind()) {
1066        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(b, _))
1067        | (&ty::RawPtr(a, _), &ty::RawPtr(b, _)) => ptr_vtable(a, b),
1068        (_, _)
1069            if let Some(source_boxed) = source_ty.boxed_ty()
1070                && let Some(target_boxed) = target_ty.boxed_ty() =>
1071        {
1072            ptr_vtable(source_boxed, target_boxed)
1073        }
1074
1075        // T as dyn* Trait
1076        (_, &ty::Dynamic(_, _, ty::DynStar)) => ptr_vtable(source_ty, target_ty),
1077
1078        (&ty::Adt(source_adt_def, source_args), &ty::Adt(target_adt_def, target_args)) => {
1079            assert_eq!(source_adt_def, target_adt_def);
1080
1081            let CustomCoerceUnsized::Struct(coerce_index) =
1082                match crate::custom_coerce_unsize_info(tcx, source_ty, target_ty) {
1083                    Ok(ccu) => ccu,
1084                    Err(e) => {
1085                        let e = Ty::new_error(tcx.tcx, e);
1086                        return (e, e);
1087                    }
1088                };
1089
1090            let source_fields = &source_adt_def.non_enum_variant().fields;
1091            let target_fields = &target_adt_def.non_enum_variant().fields;
1092
1093            assert!(
1094                coerce_index.index() < source_fields.len()
1095                    && source_fields.len() == target_fields.len()
1096            );
1097
1098            find_vtable_types_for_unsizing(
1099                tcx,
1100                source_fields[coerce_index].ty(*tcx, source_args),
1101                target_fields[coerce_index].ty(*tcx, target_args),
1102            )
1103        }
1104        _ => bug!(
1105            "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
1106            source_ty,
1107            target_ty
1108        ),
1109    }
1110}
1111
1112#[instrument(skip(tcx), level = "debug", ret)]
1113fn create_fn_mono_item<'tcx>(
1114    tcx: TyCtxt<'tcx>,
1115    instance: Instance<'tcx>,
1116    source: Span,
1117) -> Spanned<MonoItem<'tcx>> {
1118    let def_id = instance.def_id();
1119    if tcx.sess.opts.unstable_opts.profile_closures
1120        && def_id.is_local()
1121        && tcx.is_closure_like(def_id)
1122    {
1123        crate::util::dump_closure_profile(tcx, instance);
1124    }
1125
1126    respan(source, MonoItem::Fn(instance))
1127}
1128
1129/// Creates a `MonoItem` for each method that is referenced by the vtable for
1130/// the given trait/impl pair.
1131fn create_mono_items_for_vtable_methods<'tcx>(
1132    tcx: TyCtxt<'tcx>,
1133    trait_ty: Ty<'tcx>,
1134    impl_ty: Ty<'tcx>,
1135    source: Span,
1136    output: &mut MonoItems<'tcx>,
1137) {
1138    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());
1139
1140    let ty::Dynamic(trait_ty, ..) = trait_ty.kind() else {
1141        bug!("create_mono_items_for_vtable_methods: {trait_ty:?} not a trait type");
1142    };
1143    if let Some(principal) = trait_ty.principal() {
1144        let trait_ref =
1145            tcx.instantiate_bound_regions_with_erased(principal.with_self_ty(tcx, impl_ty));
1146        assert!(!trait_ref.has_escaping_bound_vars());
1147
1148        // Walk all methods of the trait, including those of its supertraits
1149        let entries = tcx.vtable_entries(trait_ref);
1150        debug!(?entries);
1151        let methods = entries
1152            .iter()
1153            .filter_map(|entry| match entry {
1154                VtblEntry::MetadataDropInPlace
1155                | VtblEntry::MetadataSize
1156                | VtblEntry::MetadataAlign
1157                | VtblEntry::Vacant => None,
1158                VtblEntry::TraitVPtr(_) => {
1159                    // all super trait items already covered, so skip them.
1160                    None
1161                }
1162                VtblEntry::Method(instance) => {
1163                    Some(*instance).filter(|instance| tcx.should_codegen_locally(*instance))
1164                }
1165            })
1166            .map(|item| create_fn_mono_item(tcx, item, source));
1167        output.extend(methods);
1168    }
1169
1170    // Also add the destructor.
1171    visit_drop_use(tcx, impl_ty, false, source, output);
1172}
1173
1174/// Scans the CTFE alloc in order to find function pointers and statics that must be monomorphized.
1175fn collect_alloc<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
1176    match tcx.global_alloc(alloc_id) {
1177        GlobalAlloc::Static(def_id) => {
1178            assert!(!tcx.is_thread_local_static(def_id));
1179            let instance = Instance::mono(tcx, def_id);
1180            if tcx.should_codegen_locally(instance) {
1181                trace!("collecting static {:?}", def_id);
1182                output.push(dummy_spanned(MonoItem::Static(def_id)));
1183            }
1184        }
1185        GlobalAlloc::Memory(alloc) => {
1186            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
1187            let ptrs = alloc.inner().provenance().ptrs();
1188            // avoid `ensure_sufficient_stack` in the common case of "no pointers"
1189            if !ptrs.is_empty() {
1190                rustc_data_structures::stack::ensure_sufficient_stack(move || {
1191                    for &prov in ptrs.values() {
1192                        collect_alloc(tcx, prov.alloc_id(), output);
1193                    }
1194                });
1195            }
1196        }
1197        GlobalAlloc::Function { instance, .. } => {
1198            if tcx.should_codegen_locally(instance) {
1199                trace!("collecting {:?} with {:#?}", alloc_id, instance);
1200                output.push(create_fn_mono_item(tcx, instance, DUMMY_SP));
1201            }
1202        }
1203        GlobalAlloc::VTable(ty, dyn_ty) => {
1204            let alloc_id = tcx.vtable_allocation((
1205                ty,
1206                dyn_ty
1207                    .principal()
1208                    .map(|principal| tcx.instantiate_bound_regions_with_erased(principal)),
1209            ));
1210            collect_alloc(tcx, alloc_id, output)
1211        }
1212    }
1213}
1214
1215/// Scans the MIR in order to find function calls, closures, and drop-glue.
1216///
1217/// Anything that's found is added to `output`. Furthermore the "mentioned items" of the MIR are returned.
1218#[instrument(skip(tcx), level = "debug")]
1219fn collect_items_of_instance<'tcx>(
1220    tcx: TyCtxt<'tcx>,
1221    instance: Instance<'tcx>,
1222    mode: CollectionMode,
1223) -> (MonoItems<'tcx>, MonoItems<'tcx>) {
1224    // This item is getting monomorphized, do mono-time checks.
1225    tcx.ensure_ok().check_mono_item(instance);
1226
1227    let body = tcx.instance_mir(instance.def);
1228    // Naively, in "used" collection mode, all functions get added to *both* `used_items` and
1229    // `mentioned_items`. Mentioned items processing will then notice that they have already been
1230    // visited, but at that point each mentioned item has been monomorphized, added to the
1231    // `mentioned_items` worklist, and checked in the global set of visited items. To remove that
1232    // overhead, we have a special optimization that avoids adding items to `mentioned_items` when
1233    // they are already added in `used_items`. We could just scan `used_items`, but that's a linear
1234    // scan and not very efficient. Furthermore we can only do that *after* monomorphizing the
1235    // mentioned item. So instead we collect all pre-monomorphized `MentionedItem` that were already
1236    // added to `used_items` in a hash set, which can efficiently query in the
1237    // `body.mentioned_items` loop below without even having to monomorphize the item.
1238    let mut used_items = MonoItems::new();
1239    let mut mentioned_items = MonoItems::new();
1240    let mut used_mentioned_items = Default::default();
1241    let mut collector = MirUsedCollector {
1242        tcx,
1243        body,
1244        used_items: &mut used_items,
1245        used_mentioned_items: &mut used_mentioned_items,
1246        instance,
1247    };
1248
1249    if mode == CollectionMode::UsedItems {
1250        for (bb, data) in traversal::mono_reachable(body, tcx, instance) {
1251            collector.visit_basic_block_data(bb, data)
1252        }
1253    }
1254
1255    // Always visit all `required_consts`, so that we evaluate them and abort compilation if any of
1256    // them errors.
1257    for const_op in body.required_consts() {
1258        if let Some(val) = collector.eval_constant(const_op) {
1259            collect_const_value(tcx, val, &mut mentioned_items);
1260        }
1261    }
1262
1263    // Always gather mentioned items. We try to avoid processing items that we have already added to
1264    // `used_items` above.
1265    for item in body.mentioned_items() {
1266        if !collector.used_mentioned_items.contains(&item.node) {
1267            let item_mono = collector.monomorphize(item.node);
1268            visit_mentioned_item(tcx, &item_mono, item.span, &mut mentioned_items);
1269        }
1270    }
1271
1272    (used_items, mentioned_items)
1273}
1274
1275fn items_of_instance<'tcx>(
1276    tcx: TyCtxt<'tcx>,
1277    (instance, mode): (Instance<'tcx>, CollectionMode),
1278) -> (&'tcx [Spanned<MonoItem<'tcx>>], &'tcx [Spanned<MonoItem<'tcx>>]) {
1279    let (used_items, mentioned_items) = collect_items_of_instance(tcx, instance, mode);
1280
1281    let used_items = tcx.arena.alloc_from_iter(used_items);
1282    let mentioned_items = tcx.arena.alloc_from_iter(mentioned_items);
1283
1284    (used_items, mentioned_items)
1285}
1286
1287/// `item` must be already monomorphized.
1288#[instrument(skip(tcx, span, output), level = "debug")]
1289fn visit_mentioned_item<'tcx>(
1290    tcx: TyCtxt<'tcx>,
1291    item: &MentionedItem<'tcx>,
1292    span: Span,
1293    output: &mut MonoItems<'tcx>,
1294) {
1295    match *item {
1296        MentionedItem::Fn(ty) => {
1297            if let ty::FnDef(def_id, args) = *ty.kind() {
1298                let instance = Instance::expect_resolve(
1299                    tcx,
1300                    ty::TypingEnv::fully_monomorphized(),
1301                    def_id,
1302                    args,
1303                    span,
1304                );
1305                // `visit_instance_use` was written for "used" item collection but works just as well
1306                // for "mentioned" item collection.
1307                // We can set `is_direct_call`; that just means we'll skip a bunch of shims that anyway
1308                // can't have their own failing constants.
1309                visit_instance_use(tcx, instance, /*is_direct_call*/ true, span, output);
1310            }
1311        }
1312        MentionedItem::Drop(ty) => {
1313            visit_drop_use(tcx, ty, /*is_direct_call*/ true, span, output);
1314        }
1315        MentionedItem::UnsizeCast { source_ty, target_ty } => {
1316            let (source_ty, target_ty) =
1317                find_vtable_types_for_unsizing(tcx.at(span), source_ty, target_ty);
1318            // This could also be a different Unsize instruction, like
1319            // from a fixed sized array to a slice. But we are only
1320            // interested in things that produce a vtable.
1321            if (target_ty.is_trait() && !source_ty.is_trait())
1322                || (target_ty.is_dyn_star() && !source_ty.is_dyn_star())
1323            {
1324                create_mono_items_for_vtable_methods(tcx, target_ty, source_ty, span, output);
1325            }
1326        }
1327        MentionedItem::Closure(source_ty) => {
1328            if let ty::Closure(def_id, args) = *source_ty.kind() {
1329                let instance =
1330                    Instance::resolve_closure(tcx, def_id, args, ty::ClosureKind::FnOnce);
1331                if tcx.should_codegen_locally(instance) {
1332                    output.push(create_fn_mono_item(tcx, instance, span));
1333                }
1334            } else {
1335                bug!()
1336            }
1337        }
1338    }
1339}
1340
1341#[instrument(skip(tcx, output), level = "debug")]
1342fn collect_const_value<'tcx>(
1343    tcx: TyCtxt<'tcx>,
1344    value: mir::ConstValue<'tcx>,
1345    output: &mut MonoItems<'tcx>,
1346) {
1347    match value {
1348        mir::ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => {
1349            collect_alloc(tcx, ptr.provenance.alloc_id(), output)
1350        }
1351        mir::ConstValue::Indirect { alloc_id, .. } => collect_alloc(tcx, alloc_id, output),
1352        mir::ConstValue::Slice { data, meta: _ } => {
1353            for &prov in data.inner().provenance().ptrs().values() {
1354                collect_alloc(tcx, prov.alloc_id(), output);
1355            }
1356        }
1357        _ => {}
1358    }
1359}
1360
1361//=-----------------------------------------------------------------------------
1362// Root Collection
1363//=-----------------------------------------------------------------------------
1364
1365// Find all non-generic items by walking the HIR. These items serve as roots to
1366// start monomorphizing from.
1367#[instrument(skip(tcx, mode), level = "debug")]
1368fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionStrategy) -> Vec<MonoItem<'_>> {
1369    debug!("collecting roots");
1370    let mut roots = MonoItems::new();
1371
1372    {
1373        let entry_fn = tcx.entry_fn(());
1374
1375        debug!("collect_roots: entry_fn = {:?}", entry_fn);
1376
1377        let mut collector = RootCollector { tcx, strategy: mode, entry_fn, output: &mut roots };
1378
1379        let crate_items = tcx.hir_crate_items(());
1380
1381        for id in crate_items.free_items() {
1382            collector.process_item(id);
1383        }
1384
1385        for id in crate_items.impl_items() {
1386            collector.process_impl_item(id);
1387        }
1388
1389        for id in crate_items.nested_bodies() {
1390            collector.process_nested_body(id);
1391        }
1392
1393        collector.push_extra_entry_roots();
1394    }
1395
1396    // We can only codegen items that are instantiable - items all of
1397    // whose predicates hold. Luckily, items that aren't instantiable
1398    // can't actually be used, so we can just skip codegenning them.
1399    roots
1400        .into_iter()
1401        .filter_map(|Spanned { node: mono_item, .. }| {
1402            mono_item.is_instantiable(tcx).then_some(mono_item)
1403        })
1404        .collect()
1405}
1406
1407struct RootCollector<'a, 'tcx> {
1408    tcx: TyCtxt<'tcx>,
1409    strategy: MonoItemCollectionStrategy,
1410    output: &'a mut MonoItems<'tcx>,
1411    entry_fn: Option<(DefId, EntryFnType)>,
1412}
1413
1414impl<'v> RootCollector<'_, 'v> {
1415    fn process_item(&mut self, id: hir::ItemId) {
1416        match self.tcx.def_kind(id.owner_id) {
1417            DefKind::Enum | DefKind::Struct | DefKind::Union => {
1418                if self.strategy == MonoItemCollectionStrategy::Eager
1419                    && !self.tcx.generics_of(id.owner_id).requires_monomorphization(self.tcx)
1420                {
1421                    debug!("RootCollector: ADT drop-glue for `{id:?}`",);
1422                    let id_args =
1423                        ty::GenericArgs::for_item(self.tcx, id.owner_id.to_def_id(), |param, _| {
1424                            match param.kind {
1425                                GenericParamDefKind::Lifetime => {
1426                                    self.tcx.lifetimes.re_erased.into()
1427                                }
1428                                GenericParamDefKind::Type { .. }
1429                                | GenericParamDefKind::Const { .. } => {
1430                                    unreachable!(
1431                                        "`own_requires_monomorphization` check means that \
1432                                we should have no type/const params"
1433                                    )
1434                                }
1435                            }
1436                        });
1437
1438                    // This type is impossible to instantiate, so we should not try to
1439                    // generate a `drop_in_place` instance for it.
1440                    if self.tcx.instantiate_and_check_impossible_predicates((
1441                        id.owner_id.to_def_id(),
1442                        id_args,
1443                    )) {
1444                        return;
1445                    }
1446
1447                    let ty =
1448                        self.tcx.type_of(id.owner_id.to_def_id()).instantiate(self.tcx, id_args);
1449                    assert!(!ty.has_non_region_param());
1450                    visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
1451                }
1452            }
1453            DefKind::GlobalAsm => {
1454                debug!(
1455                    "RootCollector: ItemKind::GlobalAsm({})",
1456                    self.tcx.def_path_str(id.owner_id)
1457                );
1458                self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
1459            }
1460            DefKind::Static { .. } => {
1461                let def_id = id.owner_id.to_def_id();
1462                debug!("RootCollector: ItemKind::Static({})", self.tcx.def_path_str(def_id));
1463                self.output.push(dummy_spanned(MonoItem::Static(def_id)));
1464            }
1465            DefKind::Const => {
1466                // Const items only generate mono items if they are actually used somewhere.
1467                // Just declaring them is insufficient.
1468
1469                // But even just declaring them must collect the items they refer to
1470                // unless their generics require monomorphization.
1471                if !self.tcx.generics_of(id.owner_id).requires_monomorphization(self.tcx)
1472                    && let Ok(val) = self.tcx.const_eval_poly(id.owner_id.to_def_id())
1473                {
1474                    collect_const_value(self.tcx, val, self.output);
1475                }
1476            }
1477            DefKind::Impl { .. } => {
1478                if self.strategy == MonoItemCollectionStrategy::Eager {
1479                    create_mono_items_for_default_impls(self.tcx, id, self.output);
1480                }
1481            }
1482            DefKind::Fn => {
1483                self.push_if_root(id.owner_id.def_id);
1484            }
1485            _ => {}
1486        }
1487    }
1488
1489    fn process_impl_item(&mut self, id: hir::ImplItemId) {
1490        if matches!(self.tcx.def_kind(id.owner_id), DefKind::AssocFn) {
1491            self.push_if_root(id.owner_id.def_id);
1492        }
1493    }
1494
1495    fn process_nested_body(&mut self, def_id: LocalDefId) {
1496        match self.tcx.def_kind(def_id) {
1497            DefKind::Closure => {
1498                if self.strategy == MonoItemCollectionStrategy::Eager
1499                    && !self
1500                        .tcx
1501                        .generics_of(self.tcx.typeck_root_def_id(def_id.to_def_id()))
1502                        .requires_monomorphization(self.tcx)
1503                {
1504                    let instance = match *self.tcx.type_of(def_id).instantiate_identity().kind() {
1505                        ty::Closure(def_id, args)
1506                        | ty::Coroutine(def_id, args)
1507                        | ty::CoroutineClosure(def_id, args) => {
1508                            Instance::new(def_id, self.tcx.erase_regions(args))
1509                        }
1510                        _ => unreachable!(),
1511                    };
1512                    let Ok(instance) = self.tcx.try_normalize_erasing_regions(
1513                        ty::TypingEnv::fully_monomorphized(),
1514                        instance,
1515                    ) else {
1516                        // Don't ICE on an impossible-to-normalize closure.
1517                        return;
1518                    };
1519                    let mono_item = create_fn_mono_item(self.tcx, instance, DUMMY_SP);
1520                    if mono_item.node.is_instantiable(self.tcx) {
1521                        self.output.push(mono_item);
1522                    }
1523                }
1524            }
1525            _ => {}
1526        }
1527    }
1528
1529    fn is_root(&self, def_id: LocalDefId) -> bool {
1530        !self.tcx.generics_of(def_id).requires_monomorphization(self.tcx)
1531            && match self.strategy {
1532                MonoItemCollectionStrategy::Eager => {
1533                    !matches!(self.tcx.codegen_fn_attrs(def_id).inline, InlineAttr::Force { .. })
1534                }
1535                MonoItemCollectionStrategy::Lazy => {
1536                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
1537                        || self.tcx.is_reachable_non_generic(def_id)
1538                        || self
1539                            .tcx
1540                            .codegen_fn_attrs(def_id)
1541                            .flags
1542                            .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
1543                }
1544            }
1545    }
1546
1547    /// If `def_id` represents a root, pushes it onto the list of
1548    /// outputs. (Note that all roots must be monomorphic.)
1549    #[instrument(skip(self), level = "debug")]
1550    fn push_if_root(&mut self, def_id: LocalDefId) {
1551        if self.is_root(def_id) {
1552            debug!("found root");
1553
1554            let instance = Instance::mono(self.tcx, def_id.to_def_id());
1555            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
1556        }
1557    }
1558
1559    /// As a special case, when/if we encounter the
1560    /// `main()` function, we also have to generate a
1561    /// monomorphized copy of the start lang item based on
1562    /// the return type of `main`. This is not needed when
1563    /// the user writes their own `start` manually.
1564    fn push_extra_entry_roots(&mut self) {
1565        let Some((main_def_id, EntryFnType::Main { .. })) = self.entry_fn else {
1566            return;
1567        };
1568
1569        let Some(start_def_id) = self.tcx.lang_items().start_fn() else {
1570            self.tcx.dcx().emit_fatal(errors::StartNotFound);
1571        };
1572        let main_ret_ty = self.tcx.fn_sig(main_def_id).no_bound_vars().unwrap().output();
1573
1574        // Given that `main()` has no arguments,
1575        // then its return type cannot have
1576        // late-bound regions, since late-bound
1577        // regions must appear in the argument
1578        // listing.
1579        let main_ret_ty = self.tcx.normalize_erasing_regions(
1580            ty::TypingEnv::fully_monomorphized(),
1581            main_ret_ty.no_bound_vars().unwrap(),
1582        );
1583
1584        let start_instance = Instance::expect_resolve(
1585            self.tcx,
1586            ty::TypingEnv::fully_monomorphized(),
1587            start_def_id,
1588            self.tcx.mk_args(&[main_ret_ty.into()]),
1589            DUMMY_SP,
1590        );
1591
1592        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
1593    }
1594}
1595
1596#[instrument(level = "debug", skip(tcx, output))]
1597fn create_mono_items_for_default_impls<'tcx>(
1598    tcx: TyCtxt<'tcx>,
1599    item: hir::ItemId,
1600    output: &mut MonoItems<'tcx>,
1601) {
1602    let Some(impl_) = tcx.impl_trait_header(item.owner_id) else {
1603        return;
1604    };
1605
1606    if matches!(impl_.polarity, ty::ImplPolarity::Negative) {
1607        return;
1608    }
1609
1610    if tcx.generics_of(item.owner_id).own_requires_monomorphization() {
1611        return;
1612    }
1613
1614    // Lifetimes never affect trait selection, so we are allowed to eagerly
1615    // instantiate an instance of an impl method if the impl (and method,
1616    // which we check below) is only parameterized over lifetime. In that case,
1617    // we use the ReErased, which has no lifetime information associated with
1618    // it, to validate whether or not the impl is legal to instantiate at all.
1619    let only_region_params = |param: &ty::GenericParamDef, _: &_| match param.kind {
1620        GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
1621        GenericParamDefKind::Type { .. } | GenericParamDefKind::Const { .. } => {
1622            unreachable!(
1623                "`own_requires_monomorphization` check means that \
1624                we should have no type/const params"
1625            )
1626        }
1627    };
1628    let impl_args = GenericArgs::for_item(tcx, item.owner_id.to_def_id(), only_region_params);
1629    let trait_ref = impl_.trait_ref.instantiate(tcx, impl_args);
1630
1631    // Unlike 'lazy' monomorphization that begins by collecting items transitively
1632    // called by `main` or other global items, when eagerly monomorphizing impl
1633    // items, we never actually check that the predicates of this impl are satisfied
1634    // in a empty param env (i.e. with no assumptions).
1635    //
1636    // Even though this impl has no type or const generic parameters, because we don't
1637    // consider higher-ranked predicates such as `for<'a> &'a mut [u8]: Copy` to
1638    // be trivially false. We must now check that the impl has no impossible-to-satisfy
1639    // predicates.
1640    if tcx.instantiate_and_check_impossible_predicates((item.owner_id.to_def_id(), impl_args)) {
1641        return;
1642    }
1643
1644    let typing_env = ty::TypingEnv::fully_monomorphized();
1645    let trait_ref = tcx.normalize_erasing_regions(typing_env, trait_ref);
1646    let overridden_methods = tcx.impl_item_implementor_ids(item.owner_id);
1647    for method in tcx.provided_trait_methods(trait_ref.def_id) {
1648        if overridden_methods.contains_key(&method.def_id) {
1649            continue;
1650        }
1651
1652        if tcx.generics_of(method.def_id).own_requires_monomorphization() {
1653            continue;
1654        }
1655
1656        // As mentioned above, the method is legal to eagerly instantiate if it
1657        // only has lifetime generic parameters. This is validated by calling
1658        // `own_requires_monomorphization` on both the impl and method.
1659        let args = trait_ref.args.extend_to(tcx, method.def_id, only_region_params);
1660        let instance = ty::Instance::expect_resolve(tcx, typing_env, method.def_id, args, DUMMY_SP);
1661
1662        let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
1663        if mono_item.node.is_instantiable(tcx) && tcx.should_codegen_locally(instance) {
1664            output.push(mono_item);
1665        }
1666    }
1667}
1668
1669//=-----------------------------------------------------------------------------
1670// Top-level entry point, tying it all together
1671//=-----------------------------------------------------------------------------
1672
1673#[instrument(skip(tcx, strategy), level = "debug")]
1674pub(crate) fn collect_crate_mono_items<'tcx>(
1675    tcx: TyCtxt<'tcx>,
1676    strategy: MonoItemCollectionStrategy,
1677) -> (Vec<MonoItem<'tcx>>, UsageMap<'tcx>) {
1678    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");
1679
1680    let roots = tcx
1681        .sess
1682        .time("monomorphization_collector_root_collections", || collect_roots(tcx, strategy));
1683
1684    debug!("building mono item graph, beginning at roots");
1685
1686    let mut state = SharedState {
1687        visited: MTLock::new(UnordSet::default()),
1688        mentioned: MTLock::new(UnordSet::default()),
1689        usage_map: MTLock::new(UsageMap::new()),
1690    };
1691    let recursion_limit = tcx.recursion_limit();
1692
1693    {
1694        let state: LRef<'_, _> = &mut state;
1695
1696        tcx.sess.time("monomorphization_collector_graph_walk", || {
1697            par_for_each_in(roots, |root| {
1698                let mut recursion_depths = DefIdMap::default();
1699                collect_items_rec(
1700                    tcx,
1701                    dummy_spanned(root),
1702                    state,
1703                    &mut recursion_depths,
1704                    recursion_limit,
1705                    CollectionMode::UsedItems,
1706                );
1707            });
1708        });
1709    }
1710
1711    // The set of MonoItems was created in an inherently indeterministic order because
1712    // of parallelism. We sort it here to ensure that the output is deterministic.
1713    let mono_items = tcx.with_stable_hashing_context(move |ref hcx| {
1714        state.visited.into_inner().into_sorted(hcx, true)
1715    });
1716
1717    (mono_items, state.usage_map.into_inner())
1718}
1719
1720pub(crate) fn provide(providers: &mut Providers) {
1721    providers.hooks.should_codegen_locally = should_codegen_locally;
1722    providers.items_of_instance = items_of_instance;
1723}