rustc_monomorphize/partitioning.rs

//! Partitioning Codegen Units for Incremental Compilation
//! ======================================================
//!
//! The task of this module is to take the complete set of monomorphizations of
//! a crate and produce a set of codegen units from it, where a codegen unit
//! is a named set of (mono-item, linkage) pairs. That is, this module
//! decides which monomorphization appears in which codegen unit with which
//! linkage. The following paragraphs describe some of the background on the
//! partitioning scheme.
//!
//! The most important opportunity for saving on compilation time with
//! incremental compilation is to avoid re-codegenning and re-optimizing code.
//! Since the unit of codegen and optimization for LLVM is the "module" or, as
//! we call it, the "codegen unit", the particulars of how much time can be
//! saved by incremental compilation are tightly linked to how the output
//! program is partitioned into these codegen units prior to passing it to LLVM --
//! especially because we have to treat codegen units as opaque entities once
//! they are created: There is no way for us to incrementally update an existing
//! LLVM module, so we have to build any such module from scratch if it was
//! affected by some change in the source code.
//!
//! From that point of view it would make sense to maximize the number of
//! codegen units by, for example, putting each function into its own module.
//! That way only the modules actually affected by some change would have to be
//! re-compiled, minimizing the number of functions that could have been re-used
//! but just happened to be located in a module that is re-compiled.
//!
//! However, since LLVM optimization does not work across module boundaries,
//! using such a highly granular partitioning would lead to very slow runtime
//! code, since it would effectively prohibit inlining and other interprocedural
//! optimizations. We want to avoid that as much as possible.
//!
//! Thus we end up with a trade-off: The bigger the codegen units, the better
//! LLVM's optimizer can do its work, but also the smaller the compilation time
//! reduction we get from incremental compilation.
//!
//! Ideally, we would create a partitioning such that there are a few big codegen
//! units with few interdependencies between them. For now though, we use the
//! following heuristic to determine the partitioning:
//!
//! - There are two codegen units for every source-level module:
//!   - One for "stable", that is, non-generic, code
//!   - One for more "volatile" code, i.e., monomorphized instances of functions
//!     defined in that module
//!
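//! For illustration (the names here are schematic; see `compute_codegen_unit_name`
//! below for how CGU names are actually built): a module `foo` would get one CGU
//! holding its non-generic functions and statics, and a second CGU, marked with a
//! "volatile" suffix in its name, holding the monomorphizations of the generic
//! functions defined in `foo`.
//!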
//! In order to see why this heuristic makes sense, let's take a look at when a
//! codegen unit can get invalidated:
//!
//! 1. The most straightforward case is when the BODY of a function or global
//!    changes. Then any codegen unit containing the code for that item has to be
//!    re-compiled. Note that this includes all codegen units where the function
//!    has been inlined.
//!
//! 2. The next case is when the SIGNATURE of a function or global changes. In
//!    this case, all codegen units containing a REFERENCE to that item have to be
//!    re-compiled. This is a superset of case 1.
//!
//! 3. The final and most subtle case is when a REFERENCE to a generic function
//!    is added or removed somewhere. Even though the definition of the function
//!    might be unchanged, a new REFERENCE might introduce a new monomorphized
//!    instance of this function which has to be placed and compiled somewhere.
//!    Conversely, when removing a REFERENCE, it might have been the last one with
//!    that particular set of generic arguments and thus we have to remove that
//!    instance.
//!
//! From the above we see that just using one codegen unit per source-level
//! module is not such a good idea, since just adding a REFERENCE to some
//! generic item somewhere else would invalidate everything within the module
//! containing the generic item. The heuristic above reduces this detrimental
//! side-effect of references a little by at least not touching the non-generic
//! code of the module.
//!
//! A Note on Inlining
//! ------------------
//! As briefly mentioned above, in order for LLVM to be able to inline a
//! function call, the body of the function has to be available in the LLVM
//! module where the call is made. This has a few consequences for partitioning:
//!
//! - The partitioning algorithm has to take care of placing functions into all
//!   codegen units where they should be available for inlining. It also has to
//!   decide on the correct linkage for these functions.
//!
//! - The partitioning algorithm has to know which functions are likely to get
//!   inlined, so it can distribute function instantiations accordingly. Since
//!   there is no way of knowing for sure which functions LLVM will decide to
//!   inline in the end, we apply a heuristic here: Only functions marked with
//!   `#[inline]` are considered for inlining by the partitioner. The current
//!   implementation will not try to determine if a function is likely to be
//!   inlined by looking at the function's definition.
//!
//! Note though that, as a side-effect of creating codegen units per
//! source-level module, functions from the same module will be available for
//! inlining, even when they are not marked `#[inline]`.

mod autodiff;

use std::cmp;
use std::collections::hash_map::Entry;
use std::fs::{self, File};
use std::io::Write;
use std::path::{Path, PathBuf};

use rustc_attr_data_structures::InlineAttr;
use rustc_data_structures::fx::{FxIndexMap, FxIndexSet};
use rustc_data_structures::sync;
use rustc_data_structures::unord::{UnordMap, UnordSet};
use rustc_hir::LangItem;
use rustc_hir::def::DefKind;
use rustc_hir::def_id::{DefId, DefIdSet, LOCAL_CRATE};
use rustc_hir::definitions::DefPathDataName;
use rustc_middle::bug;
use rustc_middle::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc_middle::middle::exported_symbols::{SymbolExportInfo, SymbolExportLevel};
use rustc_middle::mir::mono::{
    CodegenUnit, CodegenUnitNameBuilder, InstantiationMode, Linkage, MonoItem, MonoItemData,
    MonoItemPartitions, Visibility,
};
use rustc_middle::ty::print::{characteristic_def_id_of_type, with_no_trimmed_paths};
use rustc_middle::ty::{self, InstanceKind, TyCtxt};
use rustc_middle::util::Providers;
use rustc_session::CodegenUnits;
use rustc_session::config::{DumpMonoStatsFormat, SwitchWithOptPath};
use rustc_span::Symbol;
use rustc_target::spec::SymbolVisibility;
use tracing::debug;

use crate::collector::{self, MonoItemCollectionStrategy, UsageMap};
use crate::errors::{CouldntDumpMonoStats, SymbolAlreadyDefined};

struct PartitioningCx<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    usage_map: &'a UsageMap<'tcx>,
}

struct PlacedMonoItems<'tcx> {
    /// The codegen units, sorted by name to make things deterministic.
    codegen_units: Vec<CodegenUnit<'tcx>>,

    internalization_candidates: UnordSet<MonoItem<'tcx>>,
}

// The output CGUs are sorted by name.
fn partition<'tcx, I>(
    tcx: TyCtxt<'tcx>,
    mono_items: I,
    usage_map: &UsageMap<'tcx>,
) -> Vec<CodegenUnit<'tcx>>
where
    I: Iterator<Item = MonoItem<'tcx>>,
{
    let _prof_timer = tcx.prof.generic_activity("cgu_partitioning");

    let cx = &PartitioningCx { tcx, usage_map };

    // Place all mono items into a codegen unit. `place_mono_items` is
    // responsible for initializing the CGU size estimates.
    let PlacedMonoItems { mut codegen_units, internalization_candidates } = {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_place_items");
        let placed = place_mono_items(cx, mono_items);

        debug_dump(tcx, "PLACE", &placed.codegen_units);

        placed
    };

    // Merge until we don't exceed the max CGU count.
    // `merge_codegen_units` is responsible for updating the CGU size
    // estimates.
    {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_merge_cgus");
        merge_codegen_units(cx, &mut codegen_units);
        debug_dump(tcx, "MERGE", &codegen_units);
    }

    // Make as many symbols "internal" as possible, so LLVM has more freedom to
    // optimize.
    if !tcx.sess.link_dead_code() {
        let _prof_timer = tcx.prof.generic_activity("cgu_partitioning_internalize_symbols");
        internalize_symbols(cx, &mut codegen_units, internalization_candidates);

        debug_dump(tcx, "INTERNALIZE", &codegen_units);
    }

    // Mark one CGU for dead code, if necessary.
    if tcx.sess.instrument_coverage() {
        mark_code_coverage_dead_code_cgu(&mut codegen_units);
    }

    // Ensure CGUs are sorted by name, so that we get deterministic results.
    if !codegen_units.is_sorted_by(|a, b| a.name().as_str() <= b.name().as_str()) {
        let mut names = String::new();
        for cgu in codegen_units.iter() {
            names += &format!("- {}\n", cgu.name());
        }
        bug!("unsorted CGUs:\n{names}");
    }

    codegen_units
}

fn place_mono_items<'tcx, I>(cx: &PartitioningCx<'_, 'tcx>, mono_items: I) -> PlacedMonoItems<'tcx>
where
    I: Iterator<Item = MonoItem<'tcx>>,
{
    let mut codegen_units = UnordMap::default();
    let is_incremental_build = cx.tcx.sess.opts.incremental.is_some();
    let mut internalization_candidates = UnordSet::default();

    // Determine if monomorphizations instantiated in this crate will be made
    // available to downstream crates. This depends on whether we are in
    // share-generics mode and whether the current crate can even have
    // downstream crates.
    let can_export_generics = cx.tcx.local_crate_exports_generics();
    let always_export_generics = can_export_generics && cx.tcx.sess.opts.share_generics();

    let cgu_name_builder = &mut CodegenUnitNameBuilder::new(cx.tcx);
    let cgu_name_cache = &mut UnordMap::default();

    for mono_item in mono_items {
        // Handle only root (GloballyShared) items directly here. Inlined (LocalCopy) items
        // are handled at the bottom of the loop based on reachability, with one exception:
        // the #[lang = "start"] item is the program entrypoint, so there are no calls to it in MIR.
        // So even if its mode is LocalCopy, we need to treat it like a root.
        match mono_item.instantiation_mode(cx.tcx) {
            InstantiationMode::GloballyShared { .. } => {}
            InstantiationMode::LocalCopy => {
                if !cx.tcx.is_lang_item(mono_item.def_id(), LangItem::Start) {
                    continue;
                }
            }
        }

        let characteristic_def_id = characteristic_def_id_of_mono_item(cx.tcx, mono_item);
        let is_volatile = is_incremental_build && mono_item.is_generic_fn();

        let cgu_name = match characteristic_def_id {
            Some(def_id) => compute_codegen_unit_name(
                cx.tcx,
                cgu_name_builder,
                def_id,
                is_volatile,
                cgu_name_cache,
            ),
            None => fallback_cgu_name(cgu_name_builder),
        };

        let cgu = codegen_units.entry(cgu_name).or_insert_with(|| CodegenUnit::new(cgu_name));

        let mut can_be_internalized = true;
        let (linkage, visibility) = mono_item_linkage_and_visibility(
            cx.tcx,
            &mono_item,
            &mut can_be_internalized,
            can_export_generics,
            always_export_generics,
        );

        // Autodiff cannot differentiate a function once it has been inlined away, so
        // functions with active autodiff must not become internalization candidates.
        let autodiff_active = cfg!(llvm_enzyme)
            && matches!(mono_item, MonoItem::Fn(_))
            && cx
                .tcx
                .codegen_fn_attrs(mono_item.def_id())
                .autodiff_item
                .as_ref()
                .is_some_and(|ad| ad.is_active());

        if !autodiff_active && visibility == Visibility::Hidden && can_be_internalized {
            internalization_candidates.insert(mono_item);
        }
        let size_estimate = mono_item.size_estimate(cx.tcx);

        cgu.items_mut()
            .insert(mono_item, MonoItemData { inlined: false, linkage, visibility, size_estimate });

        // Get all inlined items that are reachable from `mono_item` without
        // going via another root item. This includes drop-glue, functions from
        // external crates, and local functions whose definition is marked with
        // `#[inline]`.
        let mut reachable_inlined_items = FxIndexSet::default();
        get_reachable_inlined_items(cx.tcx, mono_item, cx.usage_map, &mut reachable_inlined_items);

        // Add those inlined items. It's possible an inlined item is reachable
        // from multiple root items within a CGU, which is fine; it just means
        // the `insert` will be a no-op.
        for inlined_item in reachable_inlined_items {
            // This is a CGU-private copy.
            cgu.items_mut().entry(inlined_item).or_insert_with(|| MonoItemData {
                inlined: true,
                linkage: Linkage::Internal,
                visibility: Visibility::Default,
                size_estimate: inlined_item.size_estimate(cx.tcx),
            });
        }
    }

    // Always ensure we have at least one CGU; otherwise, if we have a
    // crate with just types (for example), we could wind up with no CGU.
    if codegen_units.is_empty() {
        let cgu_name = fallback_cgu_name(cgu_name_builder);
        codegen_units.insert(cgu_name, CodegenUnit::new(cgu_name));
    }

    let mut codegen_units: Vec<_> = cx.tcx.with_stable_hashing_context(|ref hcx| {
        codegen_units.into_items().map(|(_, cgu)| cgu).collect_sorted(hcx, true)
    });

    for cgu in codegen_units.iter_mut() {
        cgu.compute_size_estimate();
    }

    return PlacedMonoItems { codegen_units, internalization_candidates };

    fn get_reachable_inlined_items<'tcx>(
        tcx: TyCtxt<'tcx>,
        item: MonoItem<'tcx>,
        usage_map: &UsageMap<'tcx>,
        visited: &mut FxIndexSet<MonoItem<'tcx>>,
    ) {
        usage_map.for_each_inlined_used_item(tcx, item, |inlined_item| {
            let is_new = visited.insert(inlined_item);
            if is_new {
                get_reachable_inlined_items(tcx, inlined_item, usage_map, visited);
            }
        });
    }
}

// This function requires the CGUs to be sorted by name on input, and ensures
// they are sorted by name on return, for deterministic behaviour.
fn merge_codegen_units<'tcx>(
    cx: &PartitioningCx<'_, 'tcx>,
    codegen_units: &mut Vec<CodegenUnit<'tcx>>,
) {
    assert!(cx.tcx.sess.codegen_units().as_usize() >= 1);

    // A sorted order here ensures merging is deterministic.
    assert!(codegen_units.is_sorted_by(|a, b| a.name().as_str() <= b.name().as_str()));

    // This map keeps track of what got merged into what.
    let mut cgu_contents: UnordMap<Symbol, Vec<Symbol>> =
        codegen_units.iter().map(|cgu| (cgu.name(), vec![cgu.name()])).collect();

    // If N is the maximum number of CGUs, and the CGUs are sorted from largest
    // to smallest, we repeatedly find which CGU in codegen_units[N..] has the
    // greatest overlap of inlined items with codegen_units[N-1], merge that
    // CGU into codegen_units[N-1], then re-sort by size and repeat.
    //
    // We use inlined item overlap to guide this merging because it minimizes
    // duplication of inlined items, which makes LLVM faster and the generated
    // machine code better and smaller.
    //
    // Why merge into codegen_units[N-1]? We want CGUs to have similar sizes,
    // which means we don't want codegen_units[0..N] (the already big ones)
    // getting any bigger, if we can avoid it. When we have more than N CGUs
    // then at least one of the biggest N will have to grow. codegen_units[N-1]
    // is the smallest of those, and so has the most room to grow.
    let max_codegen_units = cx.tcx.sess.codegen_units().as_usize();
    while codegen_units.len() > max_codegen_units {
        // Sort small CGUs to the back.
        codegen_units.sort_by_key(|cgu| cmp::Reverse(cgu.size_estimate()));

        let cgu_dst = &codegen_units[max_codegen_units - 1];

        // Find the CGU that overlaps the most with `cgu_dst`. In the case of a
        // tie, favour the earlier (bigger) CGU.
        let mut max_overlap = 0;
        let mut max_overlap_i = max_codegen_units;
        for (i, cgu_src) in codegen_units.iter().enumerate().skip(max_codegen_units) {
            if cgu_src.size_estimate() <= max_overlap {
                // None of the remaining overlaps can exceed `max_overlap`, so
                // stop looking.
                break;
            }

            let overlap = compute_inlined_overlap(cgu_dst, cgu_src);
            if overlap > max_overlap {
                max_overlap = overlap;
                max_overlap_i = i;
            }
        }

        let mut cgu_src = codegen_units.swap_remove(max_overlap_i);
        let cgu_dst = &mut codegen_units[max_codegen_units - 1];

        // Move the items from `cgu_src` to `cgu_dst`. Some of them may be
        // duplicate inlined items, in which case the destination CGU is
        // unaffected. Recalculate size estimates afterwards.
        cgu_dst.items_mut().append(cgu_src.items_mut());
        cgu_dst.compute_size_estimate();

        // Record that `cgu_dst` now contains all the stuff that was in
        // `cgu_src` before.
        let mut consumed_cgu_names = cgu_contents.remove(&cgu_src.name()).unwrap();
        cgu_contents.get_mut(&cgu_dst.name()).unwrap().append(&mut consumed_cgu_names);
    }

    // Having multiple CGUs can drastically speed up compilation. But for
    // non-incremental builds, tiny CGUs slow down compilation *and* result in
    // worse generated code. So we don't allow CGUs smaller than this (unless
    // there is just one CGU, of course). Note that CGU sizes of 100,000+ are
    // common in larger programs, so this isn't all that large.
    const NON_INCR_MIN_CGU_SIZE: usize = 1800;

    // Repeatedly merge the two smallest codegen units as long as: it's a
    // non-incremental build, and the user didn't specify a CGU count, and
    // there are multiple CGUs, and some are below the minimum size.
    //
    // The "didn't specify a CGU count" condition is because when an explicit
    // count is requested we observe it as closely as possible. For example,
    // the `compiler_builtins` crate sets `codegen-units = 10000` and it's
    // critical they aren't merged. Also, some tests use explicit small values
    // and likewise won't work if small CGUs are merged.
    while cx.tcx.sess.opts.incremental.is_none()
        && matches!(cx.tcx.sess.codegen_units(), CodegenUnits::Default(_))
        && codegen_units.len() > 1
        && codegen_units.iter().any(|cgu| cgu.size_estimate() < NON_INCR_MIN_CGU_SIZE)
    {
        // Sort small CGUs to the back.
        codegen_units.sort_by_key(|cgu| cmp::Reverse(cgu.size_estimate()));

        let mut smallest = codegen_units.pop().unwrap();
        let second_smallest = codegen_units.last_mut().unwrap();

        // Move the items from `smallest` to `second_smallest`. Some of them
        // may be duplicate inlined items, in which case the destination CGU is
        // unaffected. Recalculate size estimates afterwards.
        second_smallest.items_mut().append(smallest.items_mut());
        second_smallest.compute_size_estimate();

        // Don't update `cgu_contents`; that's only needed for incremental builds.
    }

    let cgu_name_builder = &mut CodegenUnitNameBuilder::new(cx.tcx);

    // Rename the newly merged CGUs.
    if cx.tcx.sess.opts.incremental.is_some() {
        // If we are doing incremental compilation, we want CGU names to
        // reflect the path of the source-level module they correspond to.
        // For CGUs that contain the code of multiple modules because of the
        // merging done above, we use a concatenation of the names of all
        // contained CGUs.
        let new_cgu_names = UnordMap::from(
            cgu_contents
                .items()
                // This `filter` makes sure we only update the name of CGUs that
                // were actually modified by merging.
                .filter(|(_, cgu_contents)| cgu_contents.len() > 1)
                .map(|(current_cgu_name, cgu_contents)| {
                    let mut cgu_contents: Vec<&str> =
                        cgu_contents.iter().map(|s| s.as_str()).collect();

                    // Sort the names, so things are deterministic and easy to
                    // predict. We are sorting primitive `&str`s here so we can
                    // use unstable sort.
                    cgu_contents.sort_unstable();

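                    // E.g. (with schematic names) merging `crate.bar` into `crate.foo`
                    // produces the composite name "crate.bar--crate.foo".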
                    (*current_cgu_name, cgu_contents.join("--"))
                }),
        );

        for cgu in codegen_units.iter_mut() {
            if let Some(new_cgu_name) = new_cgu_names.get(&cgu.name()) {
                let new_cgu_name = if cx.tcx.sess.opts.unstable_opts.human_readable_cgu_names {
                    Symbol::intern(&CodegenUnit::shorten_name(new_cgu_name))
                } else {
                    // If we don't require CGU names to be human-readable,
                    // we use a fixed-length hash of the composite CGU name
                    // instead.
                    Symbol::intern(&CodegenUnit::mangle_name(new_cgu_name))
                };
                cgu.set_name(new_cgu_name);
            }
        }

        // A sorted order here ensures what follows can be deterministic.
        codegen_units.sort_by(|a, b| a.name().as_str().cmp(b.name().as_str()));
    } else {
        // When compiling non-incrementally, we rename the CGUs so they have
        // identical names except for the numeric suffix, something like
        // `regex.f10ba03eb5ec7975-cgu.N`, where `N` varies.
        //
        // It is useful for debugging and profiling purposes if the resulting
        // CGUs are sorted by name *and* reverse sorted by size. (CGU 0 is the
        // biggest, CGU 1 is the second biggest, etc.)
        //
        // So first we reverse sort by size. Then we generate the names with
        // zero-padded suffixes, which means they are automatically sorted by
        // name. The numeric suffix width depends on the number of CGUs, which
        // is always greater than zero:
        // - [1,9]     CGUs: `0`, `1`, `2`, ...
        // - [10,99]   CGUs: `00`, `01`, `02`, ...
        // - [100,999] CGUs: `000`, `001`, `002`, ...
        // - etc.
        //
        // If we didn't zero-pad, the sorted-by-name order would be `XYZ-cgu.0`,
        // `XYZ-cgu.1`, `XYZ-cgu.10`, `XYZ-cgu.11`, ..., `XYZ-cgu.2`, etc.
        codegen_units.sort_by_key(|cgu| cmp::Reverse(cgu.size_estimate()));
        let num_digits = codegen_units.len().ilog10() as usize + 1;
        for (index, cgu) in codegen_units.iter_mut().enumerate() {
            // Note: `WorkItem::short_description` depends on this name ending
            // with `-cgu.` followed by a numeric suffix. Please keep it in
            // sync with this code.
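            // E.g. with 12 CGUs, `num_digits` is 2, so index 3 yields the suffix "03".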
            let suffix = format!("{index:0num_digits$}");
            let numbered_codegen_unit_name =
                cgu_name_builder.build_cgu_name_no_mangle(LOCAL_CRATE, &["cgu"], Some(suffix));
            cgu.set_name(numbered_codegen_unit_name);
        }
    }
}

/// Compute the combined size of all inlined items that appear in both `cgu1`
/// and `cgu2`.
fn compute_inlined_overlap<'tcx>(cgu1: &CodegenUnit<'tcx>, cgu2: &CodegenUnit<'tcx>) -> usize {
    // Either order works. We pick the one that involves iterating over fewer
    // items.
    let (src_cgu, dst_cgu) =
        if cgu1.items().len() <= cgu2.items().len() { (cgu1, cgu2) } else { (cgu2, cgu1) };

    let mut overlap = 0;
    for (item, data) in src_cgu.items().iter() {
        if data.inlined && dst_cgu.items().contains_key(item) {
            overlap += data.size_estimate;
        }
    }
    overlap
}

fn internalize_symbols<'tcx>(
    cx: &PartitioningCx<'_, 'tcx>,
    codegen_units: &mut [CodegenUnit<'tcx>],
    internalization_candidates: UnordSet<MonoItem<'tcx>>,
) {
    /// For symbol internalization, we need to know whether a symbol/mono-item
    /// is used from outside the codegen unit it is defined in. This type is
    /// used to keep track of that.
    #[derive(Clone, PartialEq, Eq, Debug)]
    enum MonoItemPlacement {
        SingleCgu(Symbol),
        MultipleCgus,
    }

    let mut mono_item_placements = UnordMap::default();
    let single_codegen_unit = codegen_units.len() == 1;

    if !single_codegen_unit {
        for cgu in codegen_units.iter() {
            for item in cgu.items().keys() {
                // If there is more than one codegen unit, we need to keep track
                // of which codegen units each monomorphization is placed in.
                match mono_item_placements.entry(*item) {
                    Entry::Occupied(e) => {
                        let placement = e.into_mut();
                        debug_assert!(match *placement {
                            MonoItemPlacement::SingleCgu(cgu_name) => cgu_name != cgu.name(),
                            MonoItemPlacement::MultipleCgus => true,
                        });
                        *placement = MonoItemPlacement::MultipleCgus;
                    }
                    Entry::Vacant(e) => {
                        e.insert(MonoItemPlacement::SingleCgu(cgu.name()));
                    }
                }
            }
        }
    }

    // For each internalization candidate in each codegen unit, check whether it
    // is used from outside its defining codegen unit.
    for cgu in codegen_units {
        let home_cgu = MonoItemPlacement::SingleCgu(cgu.name());

        for (item, data) in cgu.items_mut() {
            if !internalization_candidates.contains(item) {
                // This item is not a candidate for internalization, so skip it.
                continue;
            }

            if !single_codegen_unit {
                debug_assert_eq!(mono_item_placements[item], home_cgu);

                if cx
                    .usage_map
                    .get_user_items(*item)
                    .iter()
                    .filter_map(|user_item| {
                        // Some user mono items might not have been
                        // instantiated. We can safely ignore those.
                        mono_item_placements.get(user_item)
                    })
                    .any(|placement| *placement != home_cgu)
                {
                    // Found a user from another CGU, so skip to the next item
                    // without marking this one as internal.
                    continue;
                }
            }

            // If we got here, we did not find any uses from other CGUs, so
            // it's fine to make this monomorphization internal.
            data.linkage = Linkage::Internal;
            data.visibility = Visibility::Default;
        }
    }
}

fn mark_code_coverage_dead_code_cgu<'tcx>(codegen_units: &mut [CodegenUnit<'tcx>]) {
    assert!(!codegen_units.is_empty());

    // Find the smallest CGU that has exported symbols and put the dead
    // function stubs in that CGU. We look for exported symbols to increase
    // the likelihood the linker won't throw away the dead functions.
    // FIXME(#92165): In order to truly resolve this, we need to make sure
    // the object file (CGU) containing the dead function stubs is included
    // in the final binary. This will probably require forcing these
    // function symbols to be included via `-u` or `/include` linker args.
    let dead_code_cgu = codegen_units
        .iter_mut()
        .filter(|cgu| cgu.items().iter().any(|(_, data)| data.linkage == Linkage::External))
        .min_by_key(|cgu| cgu.size_estimate());

    // If there are no CGUs that have externally linked items, then we just
    // pick the first CGU as a fallback.
    let dead_code_cgu = if let Some(cgu) = dead_code_cgu { cgu } else { &mut codegen_units[0] };

    dead_code_cgu.make_code_coverage_dead_code_cgu();
}

fn characteristic_def_id_of_mono_item<'tcx>(
    tcx: TyCtxt<'tcx>,
    mono_item: MonoItem<'tcx>,
) -> Option<DefId> {
    match mono_item {
        MonoItem::Fn(instance) => {
            let def_id = match instance.def {
                ty::InstanceKind::Item(def) => def,
                ty::InstanceKind::VTableShim(..)
                | ty::InstanceKind::ReifyShim(..)
                | ty::InstanceKind::FnPtrShim(..)
                | ty::InstanceKind::ClosureOnceShim { .. }
                | ty::InstanceKind::ConstructCoroutineInClosureShim { .. }
                | ty::InstanceKind::Intrinsic(..)
                | ty::InstanceKind::DropGlue(..)
                | ty::InstanceKind::Virtual(..)
                | ty::InstanceKind::CloneShim(..)
                | ty::InstanceKind::ThreadLocalShim(..)
                | ty::InstanceKind::FnPtrAddrShim(..)
                | ty::InstanceKind::FutureDropPollShim(..)
                | ty::InstanceKind::AsyncDropGlue(..)
                | ty::InstanceKind::AsyncDropGlueCtorShim(..) => return None,
            };

            // If this is a method, we want to put it into the same module as
            // its self-type. If the self-type does not provide a characteristic
            // DefId, we use the location of the impl after all.

            if tcx.trait_of_item(def_id).is_some() {
                let self_ty = instance.args.type_at(0);
                // This is a default implementation of a trait method.
                return characteristic_def_id_of_type(self_ty).or(Some(def_id));
            }

            if let Some(impl_def_id) = tcx.impl_of_method(def_id) {
                if tcx.sess.opts.incremental.is_some()
                    && tcx
                        .trait_id_of_impl(impl_def_id)
                        .is_some_and(|def_id| tcx.is_lang_item(def_id, LangItem::Drop))
                {
                    // Put `Drop::drop` into the same cgu as `drop_in_place`
                    // since `drop_in_place` is the only thing that can
                    // call it.
                    return None;
                }

                // This is a method within an impl, find out what the self-type is:
                let impl_self_ty = tcx.instantiate_and_normalize_erasing_regions(
                    instance.args,
                    ty::TypingEnv::fully_monomorphized(),
                    tcx.type_of(impl_def_id),
                );
                if let Some(def_id) = characteristic_def_id_of_type(impl_self_ty) {
                    return Some(def_id);
                }
            }

            Some(def_id)
        }
        MonoItem::Static(def_id) => Some(def_id),
        MonoItem::GlobalAsm(item_id) => Some(item_id.owner_id.to_def_id()),
    }
}

fn compute_codegen_unit_name(
    tcx: TyCtxt<'_>,
    name_builder: &mut CodegenUnitNameBuilder<'_>,
    def_id: DefId,
    volatile: bool,
    cache: &mut CguNameCache,
) -> Symbol {
    // Find the innermost module that is not nested within a function.
    let mut current_def_id = def_id;
    let mut cgu_def_id = None;
    // Walk backwards from the item we want to find the module for.
    loop {
        if current_def_id.is_crate_root() {
            if cgu_def_id.is_none() {
                // If we have not found a module yet, take the crate root.
                cgu_def_id = Some(def_id.krate.as_def_id());
            }
            break;
        } else if tcx.def_kind(current_def_id) == DefKind::Mod {
            if cgu_def_id.is_none() {
                cgu_def_id = Some(current_def_id);
            }
        } else {
            // If we encounter something that is not a module, throw away
            // any module that we've found so far because we now know that
            // it is nested within something else.
            cgu_def_id = None;
        }

        current_def_id = tcx.parent(current_def_id);
    }

    let cgu_def_id = cgu_def_id.unwrap();

    *cache.entry((cgu_def_id, volatile)).or_insert_with(|| {
        let def_path = tcx.def_path(cgu_def_id);

        let components = def_path.data.iter().map(|part| match part.data.name() {
            DefPathDataName::Named(name) => name,
            DefPathDataName::Anon { .. } => unreachable!(),
        });

        let volatile_suffix = volatile.then_some("volatile");

        name_builder.build_cgu_name(def_path.krate, components, volatile_suffix)
    })
}

// Anything we can't find a proper codegen unit for goes into this.
fn fallback_cgu_name(name_builder: &mut CodegenUnitNameBuilder<'_>) -> Symbol {
    name_builder.build_cgu_name(LOCAL_CRATE, &["fallback"], Some("cgu"))
}

fn mono_item_linkage_and_visibility<'tcx>(
    tcx: TyCtxt<'tcx>,
    mono_item: &MonoItem<'tcx>,
    can_be_internalized: &mut bool,
    can_export_generics: bool,
    always_export_generics: bool,
) -> (Linkage, Visibility) {
    if let Some(explicit_linkage) = mono_item.explicit_linkage(tcx) {
        return (explicit_linkage, Visibility::Default);
    }
    let vis = mono_item_visibility(
        tcx,
        mono_item,
        can_be_internalized,
        can_export_generics,
        always_export_generics,
    );
    (Linkage::External, vis)
}

type CguNameCache = UnordMap<(DefId, bool), Symbol>;

fn static_visibility<'tcx>(
    tcx: TyCtxt<'tcx>,
    can_be_internalized: &mut bool,
    def_id: DefId,
) -> Visibility {
    if tcx.is_reachable_non_generic(def_id) {
        *can_be_internalized = false;
        default_visibility(tcx, def_id, false)
    } else {
        Visibility::Hidden
    }
}

fn mono_item_visibility<'tcx>(
    tcx: TyCtxt<'tcx>,
    mono_item: &MonoItem<'tcx>,
    can_be_internalized: &mut bool,
    can_export_generics: bool,
    always_export_generics: bool,
) -> Visibility {
    let instance = match mono_item {
        // This is pretty complicated; see below.
        MonoItem::Fn(instance) => instance,

        // Misc handling for generics and such, but otherwise:
        MonoItem::Static(def_id) => return static_visibility(tcx, can_be_internalized, *def_id),
        MonoItem::GlobalAsm(item_id) => {
            return static_visibility(tcx, can_be_internalized, item_id.owner_id.to_def_id());
        }
    };

    let def_id = match instance.def {
        InstanceKind::Item(def_id)
        | InstanceKind::DropGlue(def_id, Some(_))
        | InstanceKind::FutureDropPollShim(def_id, _, _)
        | InstanceKind::AsyncDropGlue(def_id, _)
        | InstanceKind::AsyncDropGlueCtorShim(def_id, _) => def_id,

        // We match the visibility of statics here.
        InstanceKind::ThreadLocalShim(def_id) => {
            return static_visibility(tcx, can_be_internalized, def_id);
        }

        // These are all compiler glue and such, never exported, always hidden.
        InstanceKind::VTableShim(..)
        | InstanceKind::ReifyShim(..)
        | InstanceKind::FnPtrShim(..)
        | InstanceKind::Virtual(..)
        | InstanceKind::Intrinsic(..)
        | InstanceKind::ClosureOnceShim { .. }
        | InstanceKind::ConstructCoroutineInClosureShim { .. }
        | InstanceKind::DropGlue(..)
        | InstanceKind::CloneShim(..)
        | InstanceKind::FnPtrAddrShim(..) => return Visibility::Hidden,
    };

    // The `start_fn` lang item is actually a monomorphized instance of a
    // function in the standard library, used for the `main` function. We don't
    // want to export it so we tag it with `Hidden` visibility, but this symbol
    // is only referenced from the actual `main` symbol, which we unfortunately
    // don't know anything about during partitioning/collection. As a result we
    // forcibly keep this symbol out of the `internalization_candidates` set.
    //
    // FIXME: eventually we don't want to always force this symbol to have
    //        hidden visibility, it should indeed be a candidate for
    //        internalization, but we have to understand that it's referenced
    //        from the `main` symbol we'll generate later.
    //
    //        This may be fixable with a new `InstanceKind` perhaps? Unsure!
    if tcx.is_lang_item(def_id, LangItem::Start) {
        *can_be_internalized = false;
        return Visibility::Hidden;
    }

    let is_generic = instance.args.non_erasable_generics().next().is_some();

    // Upstream `DefId` instances get different handling than local ones.
    let Some(def_id) = def_id.as_local() else {
        return if is_generic
            && (always_export_generics
                || (can_export_generics
                    && tcx.codegen_fn_attrs(def_id).inline == InlineAttr::Never))
        {
            // If it is an upstream monomorphization and we export generics, we must make
            // it available to downstream crates.
            *can_be_internalized = false;
            default_visibility(tcx, def_id, true)
        } else {
            Visibility::Hidden
        };
    };

    if is_generic {
        if always_export_generics
            || (can_export_generics && tcx.codegen_fn_attrs(def_id).inline == InlineAttr::Never)
        {
            if tcx.is_unreachable_local_definition(def_id) {
                // This instance cannot be used from another crate.
                Visibility::Hidden
            } else {
                // This instance might be useful in a downstream crate.
                *can_be_internalized = false;
                default_visibility(tcx, def_id.to_def_id(), true)
            }
        } else {
            // We are not exporting generics, or the definition is not reachable
            // from downstream crates, so we can internalize its instantiations.
            Visibility::Hidden
        }
    } else {
        // If this isn't a generic function then we mark it as `Default` if it
        // is a reachable item, meaning that it's a symbol other crates may
        // use when they link to us.
        if tcx.is_reachable_non_generic(def_id.to_def_id()) {
            *can_be_internalized = false;
            debug_assert!(!is_generic);
            return default_visibility(tcx, def_id.to_def_id(), false);
        }

        // If this isn't reachable then we'll tag this with `Hidden`
        // visibility. In some situations though we'll want to prevent this
        // symbol from being internalized.
        //
        // There are two categories of items here:
        //
        // * First is weak lang items. These are basically mechanisms for
        //   libcore to forward-reference symbols defined later in crates like
        //   the standard library or `#[panic_handler]` definitions. The
        //   definition of these weak lang items needs to be referenceable by
        //   libcore, so we're no longer a candidate for internalization.
        //   Removal of these functions can't be done by LLVM but rather must be
        //   done by the linker as it's a non-local decision.
        //
        // * Second is "std internal symbols". Currently this is primarily used
        //   for allocator symbols. Allocators are a little weird in their
        //   implementation, but the idea is that the compiler, at the last
        //   minute, defines an allocator with an injected object file. The
        //   `alloc` crate references these symbols (`__rust_alloc`) and the
        //   definition doesn't get hooked up until a linked crate artifact is
        //   generated.
        //
        //   The symbols synthesized by the compiler (`__rust_alloc`) are thin
        //   veneers around the actual implementation, some other symbol which
        //   implements the same ABI. These symbols (things like `__rg_alloc`,
        //   `__rdl_alloc`, `__rde_alloc`, etc), are all tagged with "std
        //   internal symbols".
        //
        //   The std-internal symbols here **should not show up in a DLL as an
        //   exported interface**, so they return `false` from
        //   `is_reachable_non_generic` above and we'll give them `Hidden`
        //   visibility below. Like the weak lang items, though, we can't let
        //   LLVM internalize them, as the decision to omit them is left up to
        //   the linker, so we prevent them from being internalized.
        let attrs = tcx.codegen_fn_attrs(def_id);
        if attrs.flags.contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL) {
            *can_be_internalized = false;
        }

        Visibility::Hidden
    }
}

fn default_visibility(tcx: TyCtxt<'_>, id: DefId, is_generic: bool) -> Visibility {
    // Fast-path to avoid expensive query call below
    if tcx.sess.default_visibility() == SymbolVisibility::Interposable {
        return Visibility::Default;
    }

    let export_level = if is_generic {
        // Generic functions never have export-level C.
        SymbolExportLevel::Rust
    } else {
        match tcx.reachable_non_generics(id.krate).get(&id) {
            Some(SymbolExportInfo { level: SymbolExportLevel::C, .. }) => SymbolExportLevel::C,
            _ => SymbolExportLevel::Rust,
        }
    };

    match export_level {
        // C-export level items remain at `Default` to allow C code to
        // access and interpose them.
        SymbolExportLevel::C => Visibility::Default,

        // For all other symbols, `default_visibility` determines which visibility to use.
        SymbolExportLevel::Rust => tcx.sess.default_visibility().into(),
    }
}

fn debug_dump<'a, 'tcx: 'a>(tcx: TyCtxt<'tcx>, label: &str, cgus: &[CodegenUnit<'tcx>]) {
    let dump = move || {
        use std::fmt::Write;

        let mut num_cgus = 0;
        let mut all_cgu_sizes = Vec::new();

        // Note: every unique root item is placed exactly once, so the number
        // of unique root items always equals the number of placed root items.
        //
        // Also, unreached inlined items won't be counted here. This is fine.

        let mut inlined_items = UnordSet::default();

        let mut root_items = 0;
        let mut unique_inlined_items = 0;
        let mut placed_inlined_items = 0;

        let mut root_size = 0;
        let mut unique_inlined_size = 0;
        let mut placed_inlined_size = 0;

        for cgu in cgus.iter() {
            num_cgus += 1;
            all_cgu_sizes.push(cgu.size_estimate());

            for (item, data) in cgu.items() {
                if !data.inlined {
                    root_items += 1;
                    root_size += data.size_estimate;
                } else {
                    if inlined_items.insert(item) {
                        unique_inlined_items += 1;
                        unique_inlined_size += data.size_estimate;
                    }
                    placed_inlined_items += 1;
                    placed_inlined_size += data.size_estimate;
                }
            }
        }

        all_cgu_sizes.sort_unstable_by_key(|&n| cmp::Reverse(n));

        let unique_items = root_items + unique_inlined_items;
        let placed_items = root_items + placed_inlined_items;
        let items_ratio = placed_items as f64 / unique_items as f64;

        let unique_size = root_size + unique_inlined_size;
        let placed_size = root_size + placed_inlined_size;
        let size_ratio = placed_size as f64 / unique_size as f64;

        let mean_cgu_size = placed_size as f64 / num_cgus as f64;

        assert_eq!(placed_size, all_cgu_sizes.iter().sum::<usize>());

        let s = &mut String::new();
        let _ = writeln!(s, "{label}");
        let _ = writeln!(
            s,
            "- unique items: {unique_items} ({root_items} root + {unique_inlined_items} inlined), \
               unique size: {unique_size} ({root_size} root + {unique_inlined_size} inlined)\n\
             - placed items: {placed_items} ({root_items} root + {placed_inlined_items} inlined), \
               placed size: {placed_size} ({root_size} root + {placed_inlined_size} inlined)\n\
             - placed/unique items ratio: {items_ratio:.2}, \
               placed/unique size ratio: {size_ratio:.2}\n\
             - CGUs: {num_cgus}, mean size: {mean_cgu_size:.1}, sizes: {}",
            list(&all_cgu_sizes),
        );
        let _ = writeln!(s);

        for (i, cgu) in cgus.iter().enumerate() {
            let name = cgu.name();
            let size = cgu.size_estimate();
            let num_items = cgu.items().len();
            let mean_size = size as f64 / num_items as f64;

            let mut placed_item_sizes: Vec<_> =
                cgu.items().values().map(|data| data.size_estimate).collect();
            placed_item_sizes.sort_unstable_by_key(|&n| cmp::Reverse(n));
            let sizes = list(&placed_item_sizes);

            let _ = writeln!(s, "- CGU[{i}]");
            let _ = writeln!(s, "  - {name}, size: {size}");
            let _ =
                writeln!(s, "  - items: {num_items}, mean size: {mean_size:.1}, sizes: {sizes}",);

            for (item, data) in cgu.items_in_deterministic_order(tcx) {
                let linkage = data.linkage;
                let symbol_name = item.symbol_name(tcx).name;
                let symbol_hash_start = symbol_name.rfind('h');
                let symbol_hash = symbol_hash_start.map_or("<no hash>", |i| &symbol_name[i..]);
                let kind = if !data.inlined { "root" } else { "inlined" };
                let size = data.size_estimate;
                let _ = with_no_trimmed_paths!(writeln!(
                    s,
                    "  - {item} [{linkage:?}] [{symbol_hash}] ({kind}, size: {size})"
                ));
            }

            let _ = writeln!(s);
        }

        return std::mem::take(s);

        // Converts a slice to a string, capturing repetitions to save space.
        // E.g. `[4, 4, 4, 3, 2, 1, 1, 1, 1, 1]` -> "[4 (x3), 3, 2, 1 (x5)]".
        fn list(ns: &[usize]) -> String {
            let mut v = Vec::new();
            if ns.is_empty() {
                return "[]".to_string();
            }

            let mut elem = |curr, curr_count| {
                if curr_count == 1 {
                    v.push(format!("{curr}"));
                } else {
                    v.push(format!("{curr} (x{curr_count})"));
                }
            };

            let mut curr = ns[0];
            let mut curr_count = 1;

            for &n in &ns[1..] {
                if n != curr {
                    elem(curr, curr_count);
                    curr = n;
                    curr_count = 1;
                } else {
                    curr_count += 1;
                }
            }
            elem(curr, curr_count);

            format!("[{}]", v.join(", "))
        }
    };

    debug!("{}", dump());
}

#[inline(never)] // give this a place in the profiler
fn assert_symbols_are_distinct<'a, 'tcx, I>(tcx: TyCtxt<'tcx>, mono_items: I)
where
    I: Iterator<Item = &'a MonoItem<'tcx>>,
    'tcx: 'a,
{
    let _prof_timer = tcx.prof.generic_activity("assert_symbols_are_distinct");

    let mut symbols: Vec<_> =
        mono_items.map(|mono_item| (mono_item, mono_item.symbol_name(tcx))).collect();

    symbols.sort_by_key(|sym| sym.1);

    for &[(mono_item1, ref sym1), (mono_item2, ref sym2)] in symbols.array_windows() {
        if sym1 == sym2 {
            let span1 = mono_item1.local_span(tcx);
            let span2 = mono_item2.local_span(tcx);

            // Deterministically select one of the spans for error reporting
            let span = match (span1, span2) {
                (Some(span1), Some(span2)) => {
                    Some(if span1.lo().0 > span2.lo().0 { span1 } else { span2 })
                }
                (span1, span2) => span1.or(span2),
            };

            tcx.dcx().emit_fatal(SymbolAlreadyDefined { span, symbol: sym1.to_string() });
        }
    }
}

fn collect_and_partition_mono_items(tcx: TyCtxt<'_>, (): ()) -> MonoItemPartitions<'_> {
    let collection_strategy = if tcx.sess.link_dead_code() {
        MonoItemCollectionStrategy::Eager
    } else {
        MonoItemCollectionStrategy::Lazy
    };

    let (items, usage_map) = collector::collect_crate_mono_items(tcx, collection_strategy);

    // If there was an error during collection (e.g. from one of the constants we evaluated),
    // then we stop here. This way codegen does not have to worry about failing constants.
    // (codegen relies on this and ICEs will happen if this is violated.)
    tcx.dcx().abort_if_errors();

    let (codegen_units, _) = tcx.sess.time("partition_and_assert_distinct_symbols", || {
        sync::join(
            || {
                let mut codegen_units = partition(tcx, items.iter().copied(), &usage_map);
                codegen_units[0].make_primary();
                &*tcx.arena.alloc_from_iter(codegen_units)
            },
            || assert_symbols_are_distinct(tcx, items.iter()),
        )
    });

    if tcx.prof.enabled() {
        // Record CGU size estimates for self-profiling.
        for cgu in codegen_units {
            tcx.prof.artifact_size(
                "codegen_unit_size_estimate",
                cgu.name().as_str(),
                cgu.size_estimate() as u64,
            );
        }
    }

    #[cfg(not(llvm_enzyme))]
    let autodiff_mono_items: Vec<_> = vec![];
    #[cfg(llvm_enzyme)]
    let mut autodiff_mono_items: Vec<_> = vec![];
    let mono_items: DefIdSet = items
        .iter()
        .filter_map(|mono_item| match *mono_item {
            MonoItem::Fn(ref instance) => {
                #[cfg(llvm_enzyme)]
                autodiff_mono_items.push((mono_item, instance));
                Some(instance.def_id())
            }
            MonoItem::Static(def_id) => Some(def_id),
            _ => None,
        })
        .collect();

    let autodiff_items =
        autodiff::find_autodiff_source_functions(tcx, &usage_map, autodiff_mono_items);
    let autodiff_items = tcx.arena.alloc_from_iter(autodiff_items);

    // Output monomorphization stats per def_id
    if let SwitchWithOptPath::Enabled(ref path) = tcx.sess.opts.unstable_opts.dump_mono_stats {
        if let Err(err) =
            dump_mono_items_stats(tcx, codegen_units, path, tcx.crate_name(LOCAL_CRATE))
        {
            tcx.dcx().emit_fatal(CouldntDumpMonoStats { error: err.to_string() });
        }
    }

    if tcx.sess.opts.unstable_opts.print_mono_items {
        let mut item_to_cgus: UnordMap<_, Vec<_>> = Default::default();

        for cgu in codegen_units {
            for (&mono_item, &data) in cgu.items() {
                item_to_cgus.entry(mono_item).or_default().push((cgu.name(), data.linkage));
            }
        }

        let mut item_keys: Vec<_> = items
            .iter()
            .map(|i| {
                let mut output = with_no_trimmed_paths!(i.to_string());
                output.push_str(" @@");
                let mut empty = Vec::new();
                let cgus = item_to_cgus.get_mut(i).unwrap_or(&mut empty);
                cgus.sort_by_key(|(name, _)| *name);
                cgus.dedup();
                for &(ref cgu_name, linkage) in cgus.iter() {
                    output.push(' ');
                    output.push_str(cgu_name.as_str());

                    let linkage_abbrev = match linkage {
                        Linkage::External => "External",
                        Linkage::AvailableExternally => "Available",
                        Linkage::LinkOnceAny => "OnceAny",
                        Linkage::LinkOnceODR => "OnceODR",
                        Linkage::WeakAny => "WeakAny",
                        Linkage::WeakODR => "WeakODR",
                        Linkage::Internal => "Internal",
                        Linkage::ExternalWeak => "ExternalWeak",
                        Linkage::Common => "Common",
                    };

                    output.push('[');
                    output.push_str(linkage_abbrev);
                    output.push(']');
                }
                output
            })
            .collect();

        item_keys.sort();

        for item in item_keys {
            println!("MONO_ITEM {item}");
        }
    }

    MonoItemPartitions {
        all_mono_items: tcx.arena.alloc(mono_items),
        codegen_units,
        autodiff_items,
    }
}

/// Outputs stats about instantiation counts and estimated size, per `MonoItem`'s
/// def, to a file in the given output directory.
fn dump_mono_items_stats<'tcx>(
    tcx: TyCtxt<'tcx>,
    codegen_units: &[CodegenUnit<'tcx>],
    output_directory: &Option<PathBuf>,
    crate_name: Symbol,
) -> Result<(), Box<dyn std::error::Error>> {
    let output_directory = if let Some(directory) = output_directory {
        fs::create_dir_all(directory)?;
        directory
    } else {
        Path::new(".")
    };

    let format = tcx.sess.opts.unstable_opts.dump_mono_stats_format;
    let ext = format.extension();
    let filename = format!("{crate_name}.mono_items.{ext}");
    let output_path = output_directory.join(&filename);
    let mut file = File::create_buffered(&output_path)?;

    // Gather instantiated mono items grouped by def_id
    let mut items_per_def_id: FxIndexMap<_, Vec<_>> = Default::default();
    for cgu in codegen_units {
        cgu.items()
            .keys()
            // Avoid variable-sized compiler-generated shims
            .filter(|mono_item| mono_item.is_user_defined())
            .for_each(|mono_item| {
                items_per_def_id.entry(mono_item.def_id()).or_default().push(mono_item);
            });
    }

    #[derive(serde::Serialize)]
    struct MonoItem {
        name: String,
        instantiation_count: usize,
        size_estimate: usize,
        total_estimate: usize,
    }

    // Output stats sorted by total instantiated size, from heaviest to lightest
    let mut stats: Vec<_> = items_per_def_id
        .into_iter()
        .map(|(def_id, items)| {
            let name = with_no_trimmed_paths!(tcx.def_path_str(def_id));
            let instantiation_count = items.len();
            let size_estimate = items[0].size_estimate(tcx);
            let total_estimate = instantiation_count * size_estimate;
            MonoItem { name, instantiation_count, size_estimate, total_estimate }
        })
        .collect();
    stats.sort_unstable_by_key(|item| cmp::Reverse(item.total_estimate));

    if !stats.is_empty() {
        match format {
            DumpMonoStatsFormat::Json => serde_json::to_writer(file, &stats)?,
            DumpMonoStatsFormat::Markdown => {
                writeln!(
                    file,
                    "| Item | Instantiation count | Estimated Cost Per Instantiation | Total Estimated Cost |"
                )?;
                writeln!(file, "| --- | ---: | ---: | ---: |")?;

                for MonoItem { name, instantiation_count, size_estimate, total_estimate } in stats {
                    writeln!(
                        file,
                        "| `{name}` | {instantiation_count} | {size_estimate} | {total_estimate} |"
                    )?;
                }
            }
        }
    }

    Ok(())
}

pub(crate) fn provide(providers: &mut Providers) {
    providers.collect_and_partition_mono_items = collect_and_partition_mono_items;

    providers.is_codegened_item =
        |tcx, def_id| tcx.collect_and_partition_mono_items(()).all_mono_items.contains(&def_id);

    providers.codegen_unit = |tcx, name| {
        tcx.collect_and_partition_mono_items(())
            .codegen_units
            .iter()
            .find(|cgu| cgu.name() == name)
            .unwrap_or_else(|| panic!("failed to find cgu with name {name:?}"))
    };

    providers.size_estimate = |tcx, instance| {
        match instance.def {
            // "Normal" functions size estimate: the number of
            // statements, plus one for the terminator.
            InstanceKind::Item(..)
            | InstanceKind::DropGlue(..)
            | InstanceKind::AsyncDropGlueCtorShim(..) => {
                let mir = tcx.instance_mir(instance.def);
                mir.basic_blocks.iter().map(|bb| bb.statements.len() + 1).sum()
            }
            // Other compiler-generated shims size estimate: 1
            _ => 1,
        }
    };

    collector::provide(providers);
}