rustc_mir_build/builder/scope.rs

1/*!
2Managing the scope stack. The scopes are tied to lexical scopes, so as
3we descend the THIR, we push a scope on the stack, build its
4contents, and then pop it off. Every scope is named by a
5`region::Scope`.
6
7### SEME Regions
8
9When pushing a new [Scope], we record the current point in the graph (a
10basic block); this marks the entry to the scope. We then generate more
11stuff in the control-flow graph. Whenever the scope is exited, either
12via a `break` or `return` or just by fallthrough, that marks an exit
13from the scope. Each lexical scope thus corresponds to a single-entry,
14multiple-exit (SEME) region in the control-flow graph.
15
16For now, we associate a `region::Scope` with each SEME region for later reference
17(see the caveat in the next paragraph), because destruction scopes are tied to
18them. This may change in the future so that MIR lowering determines its own
19destruction scopes.
20
21### Not so SEME Regions
22
23In the course of building matches, it sometimes happens that certain code
24(namely guards) gets executed multiple times. This means that the lexical
25scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26mapping is from one scope to a vector of SEME regions. Since the SEME regions
27are disjoint, the mapping is still one-to-one for the set of SEME regions that
28we're currently in.
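
For example (an illustrative sketch), a guard shared by an or-pattern may be
lowered and tested once per alternative:

```ignore (illustrative)
match x {
    // The guard `n > 0` may be tested for `Ok(n)` and again for `Err(n)`, so
    // the code built for it can end up in two disjoint SEME regions.
    Ok(n) | Err(n) if n > 0 => {}
    _ => {}
}
```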
29
30Also in matches, the scopes assigned to arms are not even always SEME regions!
31Each arm has a single region with one entry for each pattern. We manually
32manipulate the scheduled drops in this scope to avoid dropping things multiple
33times.
34
35### Drops
36
37The primary purpose for scopes is to insert drops: while building
38the contents, we also accumulate places that need to be dropped upon
39exit from each scope. This is done by calling `schedule_drop`. Once a
40drop is scheduled, whenever we branch out we will insert drops of all
41those places onto the outgoing edge. Note that we don't know the full
42set of scheduled drops up front, and so whenever we exit from the
43scope we only drop the values scheduled thus far. For example, consider
44the scope S corresponding to this loop:
45
46```
47# let cond = true;
48loop {
49    let x = ..;
50    if cond { break; }
51    let y = ..;
52}
53```
54
55When processing the `let x`, we will add one drop to the scope for
56`x`. The break will then insert a drop for `x`. When we process `let
57y`, we will add another drop (in fact, to a subscope, but let's ignore
58that for now); any later drops would also drop `y`.
59
60### Early exit
61
62There are numerous "normal" ways to early exit a scope: `break`,
63`continue`, `return` (panics are handled separately). Whenever an
64early exit occurs, the method `break_scope` is called. It is given the
65current point in execution where the early exit occurs, as well as the
66scope you want to branch to (note that all early exits go to some other
67enclosing scope). `break_scope` will record the set of drops currently
68scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69will be added to the CFG.
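
For example (an illustrative sketch), the `break 'blk` below is an early exit
from the labeled block's scope: `break_scope` records the pending drop of
`guard` in the block's [DropTree], and that drop is emitted on the exit edge
once `in_breakable_scope` finishes:

```ignore (illustrative)
let n = 'blk: {
    let guard = String::from("needs drop");
    if guard.is_empty() {
        break 'blk 0; // `guard` is dropped on this edge
    }
    guard.len()
};
```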
70
71Panics are handled in a similar fashion, except that the drops are added to the
72MIR once the rest of the function has finished being lowered. If a terminator
73can panic, call `diverge_from(block)` with the block containing the terminator
74`block`.
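
As an illustrative sketch (`may_panic` is a hypothetical function), a block
whose terminator can unwind is registered via `diverge_from`, so values live
at that point are also dropped on the unwind path:

```ignore (illustrative)
let s = String::from("x");
may_panic(); // if this call unwinds, `s` is dropped on the unwind path too
let n = s.len();
```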
75
76### Breakable scopes
77
78In addition to the normal scope stack, we track a loop scope stack
79that contains only loops and breakable blocks. It tracks where a `break`,
80`continue` or `return` should go to.
81
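For example (an illustrative sketch), a labeled `break` consults this stack to
find the breakable scope it targets rather than the innermost enclosing loop:

```ignore (illustrative)
'outer: for x in 0..10 {
    for y in 0..10 {
        if x * y > 25 {
            break 'outer; // targets the breakable scope of the outer loop
        }
    }
}
```
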
82*/
83
84use std::mem;
85
86use rustc_middle::mir::interpret::ErrorHandled;
87use rustc_data_structures::fx::FxHashMap;
88use rustc_hir::HirId;
89use rustc_index::{IndexSlice, IndexVec};
90use rustc_middle::middle::region;
91use rustc_middle::mir::{self, *};
92use rustc_middle::thir::{AdtExpr, AdtExprBase, ArmId, ExprId, ExprKind, LintLevel};
93use rustc_middle::ty::{self, Ty, TyCtxt, TypeVisitableExt, ValTree};
94use rustc_middle::{bug, span_bug};
95use rustc_pattern_analysis::rustc::RustcPatCtxt;
96use rustc_session::lint::Level;
97use rustc_span::source_map::Spanned;
98use rustc_span::{DUMMY_SP, Span};
99use tracing::{debug, instrument};
100
101use super::matches::BuiltMatchTree;
102use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
103use crate::errors::{
104    ConstContinueBadConst, ConstContinueNotMonomorphicConst, ConstContinueUnknownJumpTarget,
105};
106
107#[derive(Debug)]
108pub(crate) struct Scopes<'tcx> {
109    scopes: Vec<Scope>,
110
111    /// The current set of breakable scopes. See module comment for more details.
112    breakable_scopes: Vec<BreakableScope<'tcx>>,
113
    /// The current set of const-continuable scopes, i.e. enclosing `#[loop_match]`
    /// loops that a `#[const_continue]` can target.
114    const_continuable_scopes: Vec<ConstContinuableScope<'tcx>>,
115
116    /// The scope of the innermost if-then currently being lowered.
117    if_then_scope: Option<IfThenScope>,
118
119    /// Drops that need to be done on unwind paths. See the comment on
120    /// [DropTree] for more details.
121    unwind_drops: DropTree,
122
123    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
124    coroutine_drops: DropTree,
125}
126
127#[derive(Debug)]
128struct Scope {
129    /// The source scope this scope was created in.
130    source_scope: SourceScope,
131
132    /// The `region::Scope` that this scope corresponds to within the source code.
133    region_scope: region::Scope,
134
135    /// set of places to drop when exiting this scope. This starts
136    /// out empty but grows as variables are declared during the
137    /// building process. This is a stack, so we always drop from the
138    /// end of the vector (top of the stack) first.
139    drops: Vec<DropData>,
140
141    moved_locals: Vec<Local>,
142
143    /// The drop index that will drop everything in and below this scope on an
144    /// unwind path.
145    cached_unwind_block: Option<DropIdx>,
146
147    /// The drop index that will drop everything in and below this scope on a
148    /// coroutine drop path.
149    cached_coroutine_drop_block: Option<DropIdx>,
150}
151
152#[derive(Clone, Copy, Debug)]
153struct DropData {
154    /// The `Span` where the drop obligation was incurred (typically where the
155    /// place was declared).
156    source_info: SourceInfo,
157
158    /// local to drop
159    local: Local,
160
161    /// Whether this is a value `Drop`, a `StorageDead`, or a lint-only drop hint.
162    kind: DropKind,
163}
164
165#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
166pub(crate) enum DropKind {
167    Value,
168    Storage,
169    ForLint,
170}
171
172#[derive(Debug)]
173struct BreakableScope<'tcx> {
174    /// Region scope of the loop or breakable block
175    region_scope: region::Scope,
176    /// The destination of the loop/block expression itself (i.e., where to put
177    /// the result of a `break` or `return` expression)
178    break_destination: Place<'tcx>,
179    /// Drops that happen on the `break`/`return` path.
180    break_drops: DropTree,
181    /// Drops that happen on the `continue` path.
182    continue_drops: Option<DropTree>,
183}
184
185#[derive(Debug)]
186struct ConstContinuableScope<'tcx> {
187    /// The scope of the `#[loop_match]` that its `#[const_continue]`s will jump to.
188    region_scope: region::Scope,
189    /// The place of the state of a `#[loop_match]`, which a `#[const_continue]` must update.
190    state_place: Place<'tcx>,
191
192    arms: Box<[ArmId]>,
193    built_match_tree: BuiltMatchTree<'tcx>,
194
195    /// Drops that happen on a `#[const_continue]`
196    const_continue_drops: DropTree,
197}
198
199#[derive(Debug)]
200struct IfThenScope {
201    /// The if-then scope or arm scope
202    region_scope: region::Scope,
203    /// Drops that happen on the `else` path.
204    else_drops: DropTree,
205}
206
207/// The target of an expression that breaks out of a scope
208#[derive(Clone, Copy, Debug)]
209pub(crate) enum BreakableTarget {
210    Continue(region::Scope),
211    Break(region::Scope),
212    Return,
213}
214
215rustc_index::newtype_index! {
216    #[orderable]
217    struct DropIdx {}
218}
219
220const ROOT_NODE: DropIdx = DropIdx::ZERO;
221
222/// A tree of drops whose lowering we have deferred. It's used for:
223///
224/// * Drops on unwind paths
225/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
226/// * Drops on return and loop exit paths
227/// * Drops on the else path in an `if let` chain
228///
229/// Once no more nodes can be added to the tree, we lower it to MIR in one go
230/// in `build_mir`.
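///
/// Conceptually (a minimal sketch using plain `std` types instead of the real
/// ones), every node records which drop runs after it, and nodes are
/// deduplicated so that exits sharing a common suffix of drops also share
/// blocks:
///
/// ```ignore (illustrative)
/// use std::collections::HashMap;
///
/// struct MiniDropTree {
///     /// `next[i]` is the index of the drop performed after drop `i`.
///     next: Vec<usize>,
///     /// Mirrors `existing_drops_map`: `(local, next)` -> node index.
///     existing: HashMap<(u32, usize), usize>,
/// }
///
/// impl MiniDropTree {
///     fn add_drop(&mut self, local: u32, next_idx: usize) -> usize {
///         let next = &mut self.next;
///         *self.existing.entry((local, next_idx)).or_insert_with(|| {
///             next.push(next_idx);
///             next.len() - 1
///         })
///     }
/// }
/// ```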
231#[derive(Debug)]
232struct DropTree {
233    /// Nodes in the drop tree, containing drop data and a link to the next node.
234    drop_nodes: IndexVec<DropIdx, DropNode>,
235    /// Map for finding the index of an existing node, given its contents.
236    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
237    /// Edges into the `DropTree` that need to be added once it's lowered.
238    entry_points: Vec<(DropIdx, BasicBlock)>,
239}
240
241/// A single node in the drop tree.
242#[derive(Debug)]
243struct DropNode {
244    /// Info about the drop to be performed at this node in the drop tree.
245    data: DropData,
246    /// Index of the "next" drop to perform (in drop order, not declaration order).
247    next: DropIdx,
248}
249
250/// Subset of [`DropNode`] used for reverse lookup in a hash table.
251#[derive(Debug, PartialEq, Eq, Hash)]
252struct DropNodeKey {
253    next: DropIdx,
254    local: Local,
255}
256
257impl Scope {
258    /// Whether there's anything to do for the cleanup path, that is,
259    /// when unwinding through this scope. This includes destructors,
260    /// but not StorageDead statements, which don't get emitted at all
261    /// for unwinding, for several reasons:
262    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
263    ///  * LLVM's memory dependency analysis can't handle it at the moment
264    ///  * polluting the cleanup MIR with StorageDead creates
265    ///    landing pads even though there are no actual destructors
266    ///  * freeing up stack space has no effect during unwinding
267    /// Note that for coroutines we do emit StorageDeads, so that the
268    /// optimizations in the MIR coroutine transform can make use of them.
269    fn needs_cleanup(&self) -> bool {
270        self.drops.iter().any(|drop| match drop.kind {
271            DropKind::Value | DropKind::ForLint => true,
272            DropKind::Storage => false,
273        })
274    }
275
276    fn invalidate_cache(&mut self) {
277        self.cached_unwind_block = None;
278        self.cached_coroutine_drop_block = None;
279    }
280}
281
282/// A trait that determines how a [DropTree] creates its blocks and
283/// links to any entry nodes.
284trait DropTreeBuilder<'tcx> {
285    /// Create a new block for the tree. This should call either
286    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
287    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
288
289    /// Links a block outside the drop tree, `from`, to the block `to` inside
290    /// the drop tree.
291    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
292}
293
294impl DropTree {
295    fn new() -> Self {
296        // The root node of the tree doesn't represent a drop, but instead
297        // represents the block in the tree that should be jumped to once all
298        // of the required drops have been performed.
299        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
300        let fake_data =
301            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
302        let drop_nodes = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
303        Self { drop_nodes, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
304    }
305
306    /// Adds a node to the drop tree, consisting of drop data and the index of
307    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
308    ///
309    /// If there is already an equivalent node in the tree, nothing is added, and
310    /// that node's index is returned. Otherwise, the new node's index is returned.
311    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
312        let drop_nodes = &mut self.drop_nodes;
313        *self
314            .existing_drops_map
315            .entry(DropNodeKey { next, local: data.local })
316            // Create a new node, and also add its index to the map.
317            .or_insert_with(|| drop_nodes.push(DropNode { data, next }))
318    }
319
320    /// Registers `from` as an entry point to this drop tree, at `to`.
321    ///
322    /// During [`Self::build_mir`], `from` will be linked to the corresponding
323    /// block within the drop tree.
324    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
325        debug_assert!(to < self.drop_nodes.next_index());
326        self.entry_points.push((to, from));
327    }
328
329    /// Builds the MIR for a given drop tree.
330    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
331        &mut self,
332        cfg: &mut CFG<'tcx>,
333        root_node: Option<BasicBlock>,
334    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
335        debug!("DropTree::build_mir(drops = {:#?})", self);
336
337        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
338        self.link_blocks(cfg, &mut blocks);
339
340        blocks
341    }
342
343    /// Assign blocks for all of the drops in the drop tree that need them.
344    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
345        &mut self,
346        cfg: &mut CFG<'tcx>,
347        root_node: Option<BasicBlock>,
348    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
349        // StorageDead statements can share blocks with each other and also with
350        // a Drop terminator. We iterate through the drops to find which drops
351        // need their own block.
352        #[derive(Clone, Copy)]
353        enum Block {
354            // This drop is unreachable
355            None,
356            // This drop is only reachable through the `StorageDead` with the
357            // specified index.
358            Shares(DropIdx),
359            // This drop has more than one way of being reached, or it is
360            // branched to from outside the tree, or its predecessor is a
361            // `Value` drop.
362            Own,
363        }
364
365        let mut blocks = IndexVec::from_elem(None, &self.drop_nodes);
366        blocks[ROOT_NODE] = root_node;
367
368        let mut needs_block = IndexVec::from_elem(Block::None, &self.drop_nodes);
369        if root_node.is_some() {
370            // In some cases (such as drops for `continue`) the root node
371            // already has a block. In this case, make sure that we don't
372            // override it.
373            needs_block[ROOT_NODE] = Block::Own;
374        }
375
376        // Sort so that we only need to check the last value.
377        let entry_points = &mut self.entry_points;
378        entry_points.sort();
379
380        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
381            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
382                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
383                needs_block[drop_idx] = Block::Own;
384                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
385                    let entry_block = entry_points.pop().unwrap().1;
386                    T::link_entry_point(cfg, entry_block, block);
387                }
388            }
389            match needs_block[drop_idx] {
390                Block::None => continue,
391                Block::Own => {
392                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
393                }
394                Block::Shares(pred) => {
395                    blocks[drop_idx] = blocks[pred];
396                }
397            }
398            if let DropKind::Value = drop_node.data.kind {
399                needs_block[drop_node.next] = Block::Own;
400            } else if drop_idx != ROOT_NODE {
401                match &mut needs_block[drop_node.next] {
402                    pred @ Block::None => *pred = Block::Shares(drop_idx),
403                    pred @ Block::Shares(_) => *pred = Block::Own,
404                    Block::Own => (),
405                }
406            }
407        }
408
409        debug!("assign_blocks: blocks = {:#?}", blocks);
410        assert!(entry_points.is_empty());
411
412        blocks
413    }
414
415    fn link_blocks<'tcx>(
416        &self,
417        cfg: &mut CFG<'tcx>,
418        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
419    ) {
420        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
421            let Some(block) = blocks[drop_idx] else { continue };
422            match drop_node.data.kind {
423                DropKind::Value => {
424                    let terminator = TerminatorKind::Drop {
425                        target: blocks[drop_node.next].unwrap(),
426                        // The caller will handle this if needed.
427                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
428                        place: drop_node.data.local.into(),
429                        replace: false,
430                        drop: None,
431                        async_fut: None,
432                    };
433                    cfg.terminate(block, drop_node.data.source_info, terminator);
434                }
435                DropKind::ForLint => {
436                    let stmt = Statement::new(
437                        drop_node.data.source_info,
438                        StatementKind::BackwardIncompatibleDropHint {
439                            place: Box::new(drop_node.data.local.into()),
440                            reason: BackwardIncompatibleDropReason::Edition2024,
441                        },
442                    );
443                    cfg.push(block, stmt);
444                    let target = blocks[drop_node.next].unwrap();
445                    if target != block {
446                        // Diagnostics don't use this `Span` but debuginfo
447                        // might. Since we don't want breakpoints to be placed
448                        // here, especially when this is on an unwind path, we
449                        // use `DUMMY_SP`.
450                        let source_info =
451                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
452                        let terminator = TerminatorKind::Goto { target };
453                        cfg.terminate(block, source_info, terminator);
454                    }
455                }
456                // Root nodes don't correspond to a drop.
457                DropKind::Storage if drop_idx == ROOT_NODE => {}
458                DropKind::Storage => {
459                    let stmt = Statement::new(
460                        drop_node.data.source_info,
461                        StatementKind::StorageDead(drop_node.data.local),
462                    );
463                    cfg.push(block, stmt);
464                    let target = blocks[drop_node.next].unwrap();
465                    if target != block {
466                        // Diagnostics don't use this `Span` but debuginfo
467                        // might. Since we don't want breakpoints to be placed
468                        // here, especially when this is on an unwind path, we
469                        // use `DUMMY_SP`.
470                        let source_info =
471                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
472                        let terminator = TerminatorKind::Goto { target };
473                        cfg.terminate(block, source_info, terminator);
474                    }
475                }
476            }
477        }
478    }
479}
480
481impl<'tcx> Scopes<'tcx> {
482    pub(crate) fn new() -> Self {
483        Self {
484            scopes: Vec::new(),
485            breakable_scopes: Vec::new(),
486            const_continuable_scopes: Vec::new(),
487            if_then_scope: None,
488            unwind_drops: DropTree::new(),
489            coroutine_drops: DropTree::new(),
490        }
491    }
492
493    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
494        debug!("push_scope({:?})", region_scope);
495        self.scopes.push(Scope {
496            source_scope: vis_scope,
497            region_scope: region_scope.0,
498            drops: vec![],
499            moved_locals: vec![],
500            cached_unwind_block: None,
501            cached_coroutine_drop_block: None,
502        });
503    }
504
505    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
506        let scope = self.scopes.pop().unwrap();
507        assert_eq!(scope.region_scope, region_scope.0);
508        scope
509    }
510
511    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
512        self.scopes
513            .iter()
514            .rposition(|scope| scope.region_scope == region_scope)
515            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
516    }
517
518    /// Returns the topmost active scope, which is known to be alive until
519    /// the next scope expression.
520    fn topmost(&self) -> region::Scope {
521        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
522    }
523}
524
525impl<'a, 'tcx> Builder<'a, 'tcx> {
526    // Adding and removing scopes
527    // ==========================
528
529    ///  Start a breakable scope, which tracks where `continue`, `break` and
530    ///  `return` should branch to.
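    ///
    /// Roughly (a hypothetical sketch of how a caller might use this when
    /// lowering a `loop` expression; the names are illustrative):
    ///
    /// ```ignore (illustrative)
    /// this.in_breakable_scope(Some(loop_block), destination, span, |this| {
    ///     // ...build the loop body; any `break`/`continue` inside goes through
    ///     // `break_scope`, which schedules its drops in this scope's drop trees...
    ///     None // a `loop` has no normal exit block of its own
    /// });
    /// ```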
531    pub(crate) fn in_breakable_scope<F>(
532        &mut self,
533        loop_block: Option<BasicBlock>,
534        break_destination: Place<'tcx>,
535        span: Span,
536        f: F,
537    ) -> BlockAnd<()>
538    where
539        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
540    {
541        let region_scope = self.scopes.topmost();
542        let scope = BreakableScope {
543            region_scope,
544            break_destination,
545            break_drops: DropTree::new(),
546            continue_drops: loop_block.map(|_| DropTree::new()),
547        };
548        self.scopes.breakable_scopes.push(scope);
549        let normal_exit_block = f(self);
550        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
551        assert!(breakable_scope.region_scope == region_scope);
552        let break_block =
553            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
554        if let Some(drops) = breakable_scope.continue_drops {
555            self.build_exit_tree(drops, region_scope, span, loop_block);
556        }
557        match (normal_exit_block, break_block) {
558            (Some(block), None) | (None, Some(block)) => block,
559            (None, None) => self.cfg.start_new_block().unit(),
560            (Some(normal_block), Some(exit_block)) => {
561                let target = self.cfg.start_new_block();
562                let source_info = self.source_info(span);
563                self.cfg.terminate(
564                    normal_block.into_block(),
565                    source_info,
566                    TerminatorKind::Goto { target },
567                );
568                self.cfg.terminate(
569                    exit_block.into_block(),
570                    source_info,
571                    TerminatorKind::Goto { target },
572                );
573                target.unit()
574            }
575        }
576    }
577
578    /// Start a const-continuable scope, which tracks where `#[const_continue] break` should
579    /// branch to.
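    ///
    /// This supports the unstable `#[loop_match]` feature; as an illustrative
    /// sketch (the surface syntax is unstable and may differ):
    ///
    /// ```ignore (illustrative)
    /// #[loop_match]
    /// loop {
    ///     state = 'blk: {
    ///         match state {
    ///             State::A => {
    ///                 #[const_continue]
    ///                 break 'blk State::B; // jumps straight to the `State::B` arm
    ///             }
    ///             State::B => break,
    ///         }
    ///     }
    /// }
    /// ```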
580    pub(crate) fn in_const_continuable_scope<F>(
581        &mut self,
582        arms: Box<[ArmId]>,
583        built_match_tree: BuiltMatchTree<'tcx>,
584        state_place: Place<'tcx>,
585        span: Span,
586        f: F,
587    ) -> BlockAnd<()>
588    where
589        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
590    {
591        let region_scope = self.scopes.topmost();
592        let scope = ConstContinuableScope {
593            region_scope,
594            state_place,
595            const_continue_drops: DropTree::new(),
596            arms,
597            built_match_tree,
598        };
599        self.scopes.const_continuable_scopes.push(scope);
600        let normal_exit_block = f(self);
601        let const_continue_scope = self.scopes.const_continuable_scopes.pop().unwrap();
602        assert!(const_continue_scope.region_scope == region_scope);
603
604        let break_block = self.build_exit_tree(
605            const_continue_scope.const_continue_drops,
606            region_scope,
607            span,
608            None,
609        );
610
611        match (normal_exit_block, break_block) {
612            (block, None) => block,
613            (normal_block, Some(exit_block)) => {
614                let target = self.cfg.start_new_block();
615                let source_info = self.source_info(span);
616                self.cfg.terminate(
617                    normal_block.into_block(),
618                    source_info,
619                    TerminatorKind::Goto { target },
620                );
621                self.cfg.terminate(
622                    exit_block.into_block(),
623                    source_info,
624                    TerminatorKind::Goto { target },
625                );
626                target.unit()
627            }
628        }
629    }
630
631    /// Start an if-then scope which tracks drop for `if` expressions and `if`
632    /// guards.
633    ///
634    /// For an if-let chain:
635    ///
636    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
637    ///
638    /// There are three possible ways the condition can be false and we may have
639    /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
640    /// To handle this correctly we use a `DropTree` in a similar way to a
641    /// `loop` expression and 'break' out on all of the 'else' paths.
642    ///
643    /// Notes:
644    /// - We don't need to keep a stack of scopes in the `Builder` because the
645    ///   'else' paths will only leave the innermost scope.
646    /// - This is also used for match guards.
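    ///
    /// As an illustrative sketch of the drops involved:
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a()
    ///     && let Some(y) = b(&x)
    ///     && c(&x, &y)
    /// {
    ///     // then block
    /// } else {
    ///     // Depending on which condition failed, this edge must drop nothing,
    ///     // `x`, or `x` and `y`; each failing condition is a separate entry
    ///     // point into the same `else_drops` tree.
    /// }
    /// ```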
647    pub(crate) fn in_if_then_scope<F>(
648        &mut self,
649        region_scope: region::Scope,
650        span: Span,
651        f: F,
652    ) -> (BasicBlock, BasicBlock)
653    where
654        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
655    {
656        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
657        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
658
659        let then_block = f(self).into_block();
660
661        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
662        assert!(if_then_scope.region_scope == region_scope);
663
664        let else_block =
665            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
666                || self.cfg.start_new_block(),
667                |else_block_and| else_block_and.into_block(),
668            );
669
670        (then_block, else_block)
671    }
672
673    /// Convenience wrapper that pushes a scope and then executes `f`
674    /// to build its contents, popping the scope afterwards.
675    #[instrument(skip(self, f), level = "debug")]
676    pub(crate) fn in_scope<F, R>(
677        &mut self,
678        region_scope: (region::Scope, SourceInfo),
679        lint_level: LintLevel,
680        f: F,
681    ) -> BlockAnd<R>
682    where
683        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
684    {
685        let source_scope = self.source_scope;
686        if let LintLevel::Explicit(current_hir_id) = lint_level {
687            let parent_id =
688                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
689            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
690        }
691        self.push_scope(region_scope);
692        let mut block;
693        let rv = unpack!(block = f(self));
694        block = self.pop_scope(region_scope, block).into_block();
695        self.source_scope = source_scope;
696        debug!(?block);
697        block.and(rv)
698    }
699
700    /// Convenience wrapper that executes `f` either within the current scope or a new scope.
701    /// Used for pattern matching, which introduces an additional scope for patterns with guards.
702    pub(crate) fn opt_in_scope<R>(
703        &mut self,
704        opt_region_scope: Option<(region::Scope, SourceInfo)>,
705        f: impl FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
706    ) -> BlockAnd<R> {
707        if let Some(region_scope) = opt_region_scope {
708            self.in_scope(region_scope, LintLevel::Inherited, f)
709        } else {
710            f(self)
711        }
712    }
713
714    /// Push a scope onto the stack. You can then build code in this
715    /// scope and call `pop_scope` afterwards. Note that these two
716    /// calls must be paired; using `in_scope` as a convenience
717    /// wrapper may be preferable.
718    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
719        self.scopes.push_scope(region_scope, self.source_scope);
720    }
721
722    /// Pops a scope, which should have region scope `region_scope`,
723    /// adding any drops onto the end of `block` that are needed.
724    /// This must match 1-to-1 with `push_scope`.
725    pub(crate) fn pop_scope(
726        &mut self,
727        region_scope: (region::Scope, SourceInfo),
728        mut block: BasicBlock,
729    ) -> BlockAnd<()> {
730        debug!("pop_scope({:?}, {:?})", region_scope, block);
731
732        block = self.leave_top_scope(block);
733
734        self.scopes.pop_scope(region_scope);
735
736        block.unit()
737    }
738
739    /// Sets up the drops for breaking from `block` to `target`.
740    pub(crate) fn break_scope(
741        &mut self,
742        mut block: BasicBlock,
743        value: Option<ExprId>,
744        target: BreakableTarget,
745        source_info: SourceInfo,
746    ) -> BlockAnd<()> {
747        let span = source_info.span;
748
749        let get_scope_index = |scope: region::Scope| {
750            // Find the breakable scope by its `region::Scope`.
751            self.scopes
752                .breakable_scopes
753                .iter()
754                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
755                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
756        };
757        let (break_index, destination) = match target {
758            BreakableTarget::Return => {
759                let scope = &self.scopes.breakable_scopes[0];
760                if scope.break_destination != Place::return_place() {
761                    span_bug!(span, "`return` in item with no return scope");
762                }
763                (0, Some(scope.break_destination))
764            }
765            BreakableTarget::Break(scope) => {
766                let break_index = get_scope_index(scope);
767                let scope = &self.scopes.breakable_scopes[break_index];
768                (break_index, Some(scope.break_destination))
769            }
770            BreakableTarget::Continue(scope) => {
771                let break_index = get_scope_index(scope);
772                (break_index, None)
773            }
774        };
775
776        match (destination, value) {
777            (Some(destination), Some(value)) => {
778                debug!("stmt_expr Break val block_context.push(SubExpr)");
779                self.block_context.push(BlockFrame::SubExpr);
780                block = self.expr_into_dest(destination, block, value).into_block();
781                self.block_context.pop();
782            }
783            (Some(destination), None) => {
784                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
785            }
786            (None, Some(_)) => {
787                panic!("`return`, `become` and `break` with a value must have a destination")
788            }
789            (None, None) => {
790                if self.tcx.sess.instrument_coverage() {
791                    // Normally we wouldn't build any MIR in this case, but that makes it
792                    // harder for coverage instrumentation to extract a relevant span for
793                    // `continue` expressions. So here we inject a dummy statement with the
794                    // desired span.
795                    self.cfg.push_coverage_span_marker(block, source_info);
796                }
797            }
798        }
799
800        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
801        let scope_index = self.scopes.scope_index(region_scope, span);
802        let drops = if destination.is_some() {
803            &mut self.scopes.breakable_scopes[break_index].break_drops
804        } else {
805            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
806            else {
807                self.tcx.dcx().span_delayed_bug(
808                    source_info.span,
809                    "unlabelled `continue` within labelled block",
810                );
811                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);
812
813                return self.cfg.start_new_block().unit();
814            };
815            drops
816        };
817
818        let mut drop_idx = ROOT_NODE;
819        for scope in &self.scopes.scopes[scope_index + 1..] {
820            for drop in &scope.drops {
821                drop_idx = drops.add_drop(*drop, drop_idx);
822            }
823        }
824        drops.add_entry_point(block, drop_idx);
825
826        // `build_drop_trees` doesn't have access to our source_info, so we
827        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
828        // because MIR type checking will panic if it hasn't been overwritten.
829        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
830        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
831
832        self.cfg.start_new_block().unit()
833    }
834
835    /// Based on `FunctionCx::eval_unevaluated_mir_constant_to_valtree`.
836    fn eval_unevaluated_mir_constant_to_valtree(
837        &self,
838        constant: ConstOperand<'tcx>,
839    ) -> Result<(ty::ValTree<'tcx>, Ty<'tcx>), interpret::ErrorHandled> {
840        assert!(!constant.const_.ty().has_param());
841        let (uv, ty) = match constant.const_ {
842            mir::Const::Unevaluated(uv, ty) => (uv.shrink(), ty),
843            mir::Const::Ty(_, c) => match c.kind() {
844                // A constant that came from a const generic but was then used as an argument to
845                // old-style simd_shuffle (passing as argument instead of as a generic param).
846                ty::ConstKind::Value(cv) => return Ok((cv.valtree, cv.ty)),
847                other => span_bug!(constant.span, "{other:#?}"),
848            },
849            mir::Const::Val(mir::ConstValue::Scalar(mir::interpret::Scalar::Int(val)), ty) => {
850                return Ok((ValTree::from_scalar_int(self.tcx, val), ty));
851            }
852            // We should never encounter `Const::Val` unless MIR opts (like const prop) evaluate
853            // a constant and write that value back into `Operand`s. This could happen, but is
854            // unlikely. Also: all users of `simd_shuffle` are on unstable and already need to take
855            // a lot of care around intrinsics. For an issue to happen here, it would require a
856            // macro expanding to a `simd_shuffle` call without wrapping the constant argument in a
857            // `const {}` block, while letting the user pass through arbitrary expressions.
858
859            // FIXME(oli-obk): Replace the magic const generic argument of `simd_shuffle` with a
860            // real const generic, and get rid of this entire function.
861            other => span_bug!(constant.span, "{other:#?}"),
862        };
863
864        match self.tcx.const_eval_resolve_for_typeck(self.typing_env(), uv, constant.span) {
865            Ok(Ok(valtree)) => Ok((valtree, ty)),
866            Ok(Err(ty)) => span_bug!(constant.span, "could not convert {ty:?} to a valtree"),
867            Err(e) => Err(e),
868        }
869    }
870
871    /// Sets up the drops for jumping from `block` to `scope`.
872    pub(crate) fn break_const_continuable_scope(
873        &mut self,
874        mut block: BasicBlock,
875        value: ExprId,
876        scope: region::Scope,
877        source_info: SourceInfo,
878    ) -> BlockAnd<()> {
879        let span = source_info.span;
880
881        // A break can only break out of a scope, so the value should be a scope.
882        let rustc_middle::thir::ExprKind::Scope { value, .. } = self.thir[value].kind else {
883            span_bug!(span, "break value must be a scope")
884        };
885
886        let expr = &self.thir[value];
887        let constant = match &expr.kind {
888            ExprKind::Adt(box AdtExpr { variant_index, fields, base, .. }) => {
889                assert!(matches!(base, AdtExprBase::None));
890                assert!(fields.is_empty());
891                ConstOperand {
892                    span: self.thir[value].span,
893                    user_ty: None,
894                    const_: Const::Ty(
895                        self.thir[value].ty,
896                        ty::Const::new_value(
897                            self.tcx,
898                            ValTree::from_branches(
899                                self.tcx,
900                                [ty::Const::new_value(
901                                    self.tcx,
902                                    ValTree::from_scalar_int(
903                                        self.tcx,
904                                        variant_index.as_u32().into(),
905                                    ),
906                                    self.tcx.types.u32,
907                                )],
908                            ),
909                            self.thir[value].ty,
910                        ),
911                    ),
912                }
913            }
914
915            ExprKind::Literal { .. }
916            | ExprKind::NonHirLiteral { .. }
917            | ExprKind::ZstLiteral { .. }
918            | ExprKind::NamedConst { .. } => self.as_constant(&self.thir[value]),
919
920            other => {
921                use crate::errors::ConstContinueNotMonomorphicConstReason as Reason;
922
923                let span = expr.span;
924                let reason = match other {
925                    ExprKind::ConstParam { .. } => Reason::ConstantParameter { span },
926                    ExprKind::ConstBlock { .. } => Reason::ConstBlock { span },
927                    _ => Reason::Other { span },
928                };
929
930                self.tcx
931                    .dcx()
932                    .emit_err(ConstContinueNotMonomorphicConst { span: expr.span, reason });
933                return block.unit();
934            }
935        };
936
937        let break_index = self
938            .scopes
939            .const_continuable_scopes
940            .iter()
941            .rposition(|const_continuable_scope| const_continuable_scope.region_scope == scope)
942            .unwrap_or_else(|| span_bug!(span, "no enclosing const-continuable scope found"));
943
944        let scope = &self.scopes.const_continuable_scopes[break_index];
945
946        let state_decl = &self.local_decls[scope.state_place.as_local().unwrap()];
947        let state_ty = state_decl.ty;
948        let (discriminant_ty, rvalue) = match state_ty.kind() {
949            ty::Adt(adt_def, _) if adt_def.is_enum() => {
950                (state_ty.discriminant_ty(self.tcx), Rvalue::Discriminant(scope.state_place))
951            }
952            ty::Uint(_) | ty::Int(_) | ty::Float(_) | ty::Bool | ty::Char => {
953                (state_ty, Rvalue::Use(Operand::Copy(scope.state_place)))
954            }
955            _ => span_bug!(state_decl.source_info.span, "unsupported #[loop_match] state"),
956        };
957
958        // The `PatCtxt` is normally used in pattern exhaustiveness checking, but reused
959        // here because it performs normalization and const evaluation.
960        let dropless_arena = rustc_arena::DroplessArena::default();
961        let typeck_results = self.tcx.typeck(self.def_id);
962        let cx = RustcPatCtxt {
963            tcx: self.tcx,
964            typeck_results,
965            module: self.tcx.parent_module(self.hir_id).to_def_id(),
966            // FIXME(#132279): We're in a body, should handle opaques.
967            typing_env: rustc_middle::ty::TypingEnv::non_body_analysis(self.tcx, self.def_id),
968            dropless_arena: &dropless_arena,
969            match_lint_level: self.hir_id,
970            whole_match_span: Some(rustc_span::Span::default()),
971            scrut_span: rustc_span::Span::default(),
972            refutable: true,
973            known_valid_scrutinee: true,
974            internal_state: Default::default(),
975        };
976
977        let valtree = match self.eval_unevaluated_mir_constant_to_valtree(constant) {
978            Ok((valtree, ty)) => {
979                // Defensively check that the type is monomorphic.
980                assert!(!ty.has_param());
981
982                valtree
983            }
984            Err(ErrorHandled::Reported(..)) => {
985                return block.unit();
986            }
987            Err(ErrorHandled::TooGeneric(_)) => {
988                self.tcx.dcx().emit_fatal(ConstContinueBadConst { span: constant.span });
989            }
990        };
991
992        let Some(real_target) =
993            self.static_pattern_match(&cx, valtree, &*scope.arms, &scope.built_match_tree)
994        else {
995            self.tcx.dcx().emit_fatal(ConstContinueUnknownJumpTarget { span })
996        };
997
998        self.block_context.push(BlockFrame::SubExpr);
999        let state_place = scope.state_place;
1000        block = self.expr_into_dest(state_place, block, value).into_block();
1001        self.block_context.pop();
1002
1003        let discr = self.temp(discriminant_ty, source_info.span);
1004        let scope_index = self
1005            .scopes
1006            .scope_index(self.scopes.const_continuable_scopes[break_index].region_scope, span);
1007        let scope = &mut self.scopes.const_continuable_scopes[break_index];
1008        self.cfg.push_assign(block, source_info, discr, rvalue);
1009        let drop_and_continue_block = self.cfg.start_new_block();
1010        let imaginary_target = self.cfg.start_new_block();
1011        self.cfg.terminate(
1012            block,
1013            source_info,
1014            TerminatorKind::FalseEdge { real_target: drop_and_continue_block, imaginary_target },
1015        );
1016
1017        let drops = &mut scope.const_continue_drops;
1018
1019        let drop_idx = self.scopes.scopes[scope_index + 1..]
1020            .iter()
1021            .flat_map(|scope| &scope.drops)
1022            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
1023
1024        drops.add_entry_point(imaginary_target, drop_idx);
1025
1026        self.cfg.terminate(imaginary_target, source_info, TerminatorKind::UnwindResume);
1027
1028        let region_scope = scope.region_scope;
1029        let scope_index = self.scopes.scope_index(region_scope, span);
1030        let mut drops = DropTree::new();
1031
1032        let drop_idx = self.scopes.scopes[scope_index + 1..]
1033            .iter()
1034            .flat_map(|scope| &scope.drops)
1035            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
1036
1037        drops.add_entry_point(drop_and_continue_block, drop_idx);
1038
1039        // `build_drop_trees` doesn't have access to our source_info, so we
1040        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
1041        // because MIR type checking will panic if it hasn't been overwritten.
1042        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
1043        self.cfg.terminate(drop_and_continue_block, source_info, TerminatorKind::UnwindResume);
1044
1045        self.build_exit_tree(drops, region_scope, span, Some(real_target));
1046
1047        return self.cfg.start_new_block().unit();
1048    }
1049
1050    /// Sets up the drops for breaking from `block` due to an `if` condition
1051    /// that turned out to be false.
1052    ///
1053    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
1054    /// there is an if-then scope to tell us what the target scope is.
1055    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
1056        let if_then_scope = self
1057            .scopes
1058            .if_then_scope
1059            .as_ref()
1060            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
1061
1062        let target = if_then_scope.region_scope;
1063        let scope_index = self.scopes.scope_index(target, source_info.span);
1064
1065        // Upgrade `if_then_scope` to `&mut`.
1066        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");
1067
1068        let mut drop_idx = ROOT_NODE;
1069        let drops = &mut if_then_scope.else_drops;
1070        for scope in &self.scopes.scopes[scope_index + 1..] {
1071            for drop in &scope.drops {
1072                drop_idx = drops.add_drop(*drop, drop_idx);
1073            }
1074        }
1075        drops.add_entry_point(block, drop_idx);
1076
1077        // `build_drop_trees` doesn't have access to our source_info, so we
1078        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
1079        // because MIR type checking will panic if it hasn't been overwritten.
1080        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
1081        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
1082    }
1083
1084    /// Sets up the drops for explicit tail calls.
1085    ///
1086    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
1087    /// Instead, all scheduled drops are immediately added to the CFG.
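    ///
    /// As an illustrative sketch (assuming the unstable `explicit_tail_calls`
    /// feature and some `g: fn(String) -> usize`):
    ///
    /// ```ignore (illustrative)
    /// fn f(x: String) -> usize {
    ///     let tmp = String::from("dropped before the call");
    ///     become g(x) // `tmp`'s drop is emitted right here, ahead of the tail call
    /// }
    /// ```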
1088    pub(crate) fn break_for_tail_call(
1089        &mut self,
1090        mut block: BasicBlock,
1091        args: &[Spanned<Operand<'tcx>>],
1092        source_info: SourceInfo,
1093    ) -> BlockAnd<()> {
1094        let arg_drops: Vec<_> = args
1095            .iter()
1096            .rev()
1097            .filter_map(|arg| match &arg.node {
1098                Operand::Copy(_) => bug!("copy op in tail call args"),
1099                Operand::Move(place) => {
1100                    let local =
1101                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));
1102
1103                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1104                        return None;
1105                    }
1106
1107                    Some(DropData { source_info, local, kind: DropKind::Value })
1108                }
1109                Operand::Constant(_) | Operand::RuntimeChecks(_) => None,
1110            })
1111            .collect();
1112
1113        let mut unwind_to = self.diverge_cleanup_target(
1114            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
1115            DUMMY_SP,
1116        );
1117        let typing_env = self.typing_env();
1118        let unwind_drops = &mut self.scopes.unwind_drops;
1119
1120        // The innermost scope contains only the destructors for the tail call arguments;
1121        // we only want to drop these in case of a panic, so we skip it.
1122        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
1123            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
1124            for drop_data in scope.drops.iter().rev() {
1125                let source_info = drop_data.source_info;
1126                let local = drop_data.local;
1127
1128                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
1129                    continue;
1130                }
1131
1132                match drop_data.kind {
1133                    DropKind::Value => {
1134                        // `unwind_to` should drop the value that we're about to
1135                        // schedule. If dropping this value panics, then we continue
1136                        // with the *next* value on the unwind path.
1137                        debug_assert_eq!(
1138                            unwind_drops.drop_nodes[unwind_to].data.local,
1139                            drop_data.local
1140                        );
1141                        debug_assert_eq!(
1142                            unwind_drops.drop_nodes[unwind_to].data.kind,
1143                            drop_data.kind
1144                        );
1145                        unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1146
1147                        let mut unwind_entry_point = unwind_to;
1148
1149                        // the tail call arguments must be dropped if any of these drops panic
1150                        for drop in arg_drops.iter().copied() {
1151                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
1152                        }
1153
1154                        unwind_drops.add_entry_point(block, unwind_entry_point);
1155
1156                        let next = self.cfg.start_new_block();
1157                        self.cfg.terminate(
1158                            block,
1159                            source_info,
1160                            TerminatorKind::Drop {
1161                                place: local.into(),
1162                                target: next,
1163                                unwind: UnwindAction::Continue,
1164                                replace: false,
1165                                drop: None,
1166                                async_fut: None,
1167                            },
1168                        );
1169                        block = next;
1170                    }
1171                    DropKind::ForLint => {
1172                        self.cfg.push(
1173                            block,
1174                            Statement::new(
1175                                source_info,
1176                                StatementKind::BackwardIncompatibleDropHint {
1177                                    place: Box::new(local.into()),
1178                                    reason: BackwardIncompatibleDropReason::Edition2024,
1179                                },
1180                            ),
1181                        );
1182                    }
1183                    DropKind::Storage => {
1184                        // Only temps and vars need their storage dead.
1185                        assert!(local.index() > self.arg_count);
1186                        self.cfg.push(
1187                            block,
1188                            Statement::new(source_info, StatementKind::StorageDead(local)),
1189                        );
1190                    }
1191                }
1192            }
1193        }
1194
1195        block.unit()
1196    }
1197
1198    fn is_async_drop_impl(
1199        tcx: TyCtxt<'tcx>,
1200        local_decls: &IndexVec<Local, LocalDecl<'tcx>>,
1201        typing_env: ty::TypingEnv<'tcx>,
1202        local: Local,
1203    ) -> bool {
1204        let ty = local_decls[local].ty;
1205        if ty.is_async_drop(tcx, typing_env) || ty.is_coroutine() {
1206            return true;
1207        }
1208        ty.needs_async_drop(tcx, typing_env)
1209    }
1210    fn is_async_drop(&self, local: Local) -> bool {
1211        Self::is_async_drop_impl(self.tcx, &self.local_decls, self.typing_env(), local)
1212    }
1213
1214    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
1215        // If we are emitting a `drop` statement, we need to have the cached
1216        // diverge cleanup pads ready in case that drop panics.
1217        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
1218        let is_coroutine = self.coroutine.is_some();
1219        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
1220
1221        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1222        let has_async_drops = is_coroutine
1223            && scope.drops.iter().any(|v| v.kind == DropKind::Value && self.is_async_drop(v.local));
1224        let dropline_to = if has_async_drops { Some(self.diverge_dropline()) } else { None };
1225        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1226        let typing_env = self.typing_env();
1227        build_scope_drops(
1228            &mut self.cfg,
1229            &mut self.scopes.unwind_drops,
1230            &mut self.scopes.coroutine_drops,
1231            scope,
1232            block,
1233            unwind_to,
1234            dropline_to,
1235            is_coroutine && needs_cleanup,
1236            self.arg_count,
1237            |v: Local| Self::is_async_drop_impl(self.tcx, &self.local_decls, typing_env, v),
1238        )
1239        .into_block()
1240    }
1241
1242    /// Possibly creates a new source scope if `current_root` and `parent_root`
1243    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
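    ///
    /// For example (an illustrative sketch), an explicit lint attribute introduces
    /// a new lint root, so the MIR built for `b` gets a fresh `SourceScope`:
    ///
    /// ```ignore (illustrative)
    /// fn f() {
    ///     let a = 1;
    ///     #[allow(unused_variables)]
    ///     let b = 2;
    /// }
    /// ```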
1244    pub(crate) fn maybe_new_source_scope(
1245        &mut self,
1246        span: Span,
1247        current_id: HirId,
1248        parent_id: HirId,
1249    ) {
1250        let (current_root, parent_root) =
1251            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
1252                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
1253                // the only part of rustc that tracks MIR -> HIR is the
1254                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
1255                // locations. Normally the number of source scopes is limited to the set of nodes
1256                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
1257                // behavior to maximize the number of source scopes, increasing the granularity of
1258                // the MIR->HIR mapping.
1259                (current_id, parent_id)
1260            } else {
1261                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
1262                // parents. We estimate the true lint roots here to avoid creating a lot of source
1263                // scopes.
1264                (
1265                    self.maybe_lint_level_root_bounded(current_id),
1266                    if parent_id == self.hir_id {
1267                        parent_id // this is very common
1268                    } else {
1269                        self.maybe_lint_level_root_bounded(parent_id)
1270                    },
1271                )
1272            };
1273
1274        if current_root != parent_root {
1275            let lint_level = LintLevel::Explicit(current_root);
1276            self.source_scope = self.new_source_scope(span, lint_level);
1277        }
1278    }
1279
1280    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
1281    /// It stops at `self.hir_id` and just returns it if reached.
1282    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
1283        // This assertion lets us just store `ItemLocalId` in the cache, rather
1284        // than the full `HirId`.
1285        assert_eq!(orig_id.owner, self.hir_id.owner);
1286
1287        let mut id = orig_id;
1288        loop {
1289            if id == self.hir_id {
1290                // This is a moderately common case, mostly hit for previously unseen nodes.
1291                break;
1292            }
1293
1294            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
1295                // This is a rare case. It's for a node path that doesn't reach the root due to an
1296                // intervening lint level attribute. This result doesn't get cached.
1297                return id;
1298            }
1299
1300            let next = self.tcx.parent_hir_id(id);
1301            if next == id {
1302                bug!("lint traversal reached the root of the crate");
1303            }
1304            id = next;
1305
1306            // This lookup is just an optimization; it can be removed without affecting
1307            // functionality. It might seem strange to see this at the end of this loop, but the
1308            // `orig_id` passed in to this function is almost always previously unseen, for which a
1309            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
1310            // cache lookups have a very high hit rate.
1311            if self.lint_level_roots_cache.contains(id.local_id) {
1312                break;
1313            }
1314        }
1315
1316        // `orig_id` traced to `self.hir_id`; record this fact. If `orig_id` is a leaf node it will
1317        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
1318        // The performance wins from the cache all come from caching non-leaf nodes.
1319        self.lint_level_roots_cache.insert(orig_id.local_id);
1320        self.hir_id
1321    }
1322
1323    /// Creates a new source scope, nested in the current one.
1324    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
1325        let parent = self.source_scope;
1326        debug!(
1327            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
1328            span,
1329            lint_level,
1330            parent,
1331            self.source_scopes.get(parent)
1332        );
1333        let scope_local_data = SourceScopeLocalData {
1334            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
1335                lint_root
1336            } else {
1337                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
1338            },
1339        };
1340        self.source_scopes.push(SourceScopeData {
1341            span,
1342            parent_scope: Some(parent),
1343            inlined: None,
1344            inlined_parent_scope: None,
1345            local_data: ClearCrossCrate::Set(scope_local_data),
1346        })
1347    }
1348
1349    /// Given a span and the current source scope, make a SourceInfo.
1350    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
1351        SourceInfo { span, scope: self.source_scope }
1352    }
1353
1354    // Finding scopes
1355    // ==============
1356
1357    /// Returns the scope that we should use as the lifetime of an
1358    /// operand. Basically, an operand must live until it is consumed.
1359    /// This is similar to, but not quite the same as, the temporary
1360    /// scope (which can be larger or smaller).
1361    ///
1362    /// Consider:
1363    /// ```ignore (illustrative)
1364    /// let x = foo(bar(X, Y));
1365    /// ```
1366    /// We wish to pop the storage for X and Y after `bar()` is
1367    /// called, not after the whole `let` is completed.
1368    ///
1369    /// As another example, if the second argument diverges:
1370    /// ```ignore (illustrative)
1371    /// foo(Box::new(2), panic!())
1372    /// ```
1373    /// We would allocate the box but then free it on the unwinding
1374    /// path; we would also emit a free on the 'success' path from
1375    /// panic, but that will turn out to be removed as dead-code.
1376    pub(crate) fn local_scope(&self) -> region::Scope {
1377        self.scopes.topmost()
1378    }
1379
1380    // Scheduling drops
1381    // ================
1382
1383    pub(crate) fn schedule_drop_storage_and_value(
1384        &mut self,
1385        span: Span,
1386        region_scope: region::Scope,
1387        local: Local,
1388    ) {
1389        self.schedule_drop(span, region_scope, local, DropKind::Storage);
1390        self.schedule_drop(span, region_scope, local, DropKind::Value);
1391    }
1392
1393    /// Indicates that `place` should be dropped on exit from `region_scope`.
1394    ///
1395    /// When called with `DropKind::Storage`, `place` shouldn't be the return
1396    /// place, or a function parameter.
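    ///
    /// Roughly speaking (an illustrative sketch, not an exact trace of the lowering):
    ///
    /// ```ignore (illustrative)
    /// {
    ///     let s = String::new(); // schedules DropKind::Storage and DropKind::Value for `s`
    ///     // ...
    /// } // on exit from this scope, the scheduled drops are emitted
    /// ```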
1397    pub(crate) fn schedule_drop(
1398        &mut self,
1399        span: Span,
1400        region_scope: region::Scope,
1401        local: Local,
1402        drop_kind: DropKind,
1403    ) {
1404        let needs_drop = match drop_kind {
1405            DropKind::Value | DropKind::ForLint => {
1406                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1407                    return;
1408                }
1409                true
1410            }
1411            DropKind::Storage => {
1412                if local.index() <= self.arg_count {
1413                    span_bug!(
1414                        span,
1415                        "`schedule_drop` called with body argument {:?} \
1416                        but its storage does not require a drop",
1417                        local,
1418                    )
1419                }
1420                false
1421            }
1422        };
1423
1424        // When building drops, we try to cache chains of drops to reduce the
1425        // number of `DropTree::add_drop` calls. This, however, means that
1426        // whenever we add a drop into a scope which already had some entries
1427        // in the drop tree built (and thus, cached) for it, we must invalidate
1428        // all caches which might branch into the scope which had a drop just
1429        // added to it. This is necessary, because otherwise some other code
1430        // might use the cache to branch into already built chain of drops,
1431        // essentially ignoring the newly added drop.
1432        //
1433        // For example, consider two scopes, each with a drop. These are built
1434        // and thus the caches are filled:
1435        //
1436        // +--------------------------------------------------------+
1437        // | +---------------------------------+                    |
1438        // | | +--------+     +-------------+  |  +---------------+ |
1439        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
1440        // | | +--------+     +-------------+  |  +---------------+ |
1441        // | +------------|outer_scope cache|--+                    |
1442        // +------------------------------|middle_scope cache|------+
1443        //
1444        // Now, a new, innermost scope is added along with a new drop into
1445        // both innermost and outermost scopes:
1446        //
1447        // +------------------------------------------------------------+
1448        // | +----------------------------------+                       |
1449        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
1450        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
1451        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
1452        // | |             +-+ +-------------+  |                       |
1453        // | +---|invalid outer_scope cache|----+                       |
1454        // +----=----------------|invalid middle_scope cache|-----------+
1455        //
1456        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
1457        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
1458        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
1459        // wrong results.
1460        //
1461        // Note that this code iterates scopes from the innermost to the outermost,
1462        // invalidating the cache of each scope visited. This way only the bare minimum of
1463        // caches gets invalidated; i.e., if a new drop is added to the middle scope, the
1464        // cache of the outer scope stays intact.
1465        //
1466        // Since we only cache drops for the unwind path and the coroutine drop
1467        // path, we only need to invalidate the cache for drops that happen on
1468        // the unwind or coroutine drop paths. This means that for
1469        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
1470        let invalidate_caches = needs_drop || self.coroutine.is_some();
1471        for scope in self.scopes.scopes.iter_mut().rev() {
1472            if invalidate_caches {
1473                scope.invalidate_cache();
1474            }
1475
1476            if scope.region_scope == region_scope {
1477                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1478                // Attribute scope exit drops to scope's closing brace.
1479                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1480
1481                scope.drops.push(DropData {
1482                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1483                    local,
1484                    kind: drop_kind,
1485                });
1486
1487                return;
1488            }
1489        }
1490
1491        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
1492    }
1493
1494    /// Schedule emission of a backwards incompatible drop lint hint.
1495    /// Applicable only to temporary values for now.
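    ///
    /// As a hedged illustration (hypothetical example; which temporaries are affected is decided
    /// by the callers of this method), a temporary whose drop point may change in Edition 2024
    /// gets a `DropKind::ForLint` entry so the lint can point at the old drop location:
    ///
    /// ```ignore (illustrative)
    /// fn f() -> usize {
    ///     let g = guard();
    ///     compute(&make_temp()) // the temporary's drop timing may differ across editions
    /// }
    /// ```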
1496    #[instrument(level = "debug", skip(self))]
1497    pub(crate) fn schedule_backwards_incompatible_drop(
1498        &mut self,
1499        span: Span,
1500        region_scope: region::Scope,
1501        local: Local,
1502    ) {
1503        // Note that we are *not* gating BIDs here on whether they have significant destructor.
1504        // We need to know all of them so that we can capture potential borrow-checking errors.
1505        for scope in self.scopes.scopes.iter_mut().rev() {
1506            // Since we are inserting a linting MIR statement, we have to invalidate the caches.
1507            scope.invalidate_cache();
1508            if scope.region_scope == region_scope {
1509                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1510                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1511
1512                scope.drops.push(DropData {
1513                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1514                    local,
1515                    kind: DropKind::ForLint,
1516                });
1517
1518                return;
1519            }
1520        }
1521        span_bug!(
1522            span,
1523            "region scope {:?} not in scope to drop {:?} for linting",
1524            region_scope,
1525            local
1526        );
1527    }
1528
1529    /// Indicates that the "local operand" stored in `local` is
1530    /// *moved* at some point during execution (see `local_scope` for
1531    /// more information about what a "local operand" is -- in short,
1532    /// it's an intermediate operand created as part of preparing some
1533    /// MIR instruction). We use this information to suppress
1534    /// redundant drops on the non-unwind paths. This results in less
1535    /// MIR, but also avoids spurious borrow check errors
1536    /// (c.f. #64391).
1537    ///
1538    /// Example: when compiling the call to `foo` here:
1539    ///
1540    /// ```ignore (illustrative)
1541    /// foo(bar(), ...)
1542    /// ```
1543    ///
1544    /// we would evaluate `bar()` to an operand `_X`. We would also
1545    /// schedule `_X` to be dropped when the expression scope for
1546    /// `foo(bar())` is exited. This is relevant, for example, if the
1547    /// later arguments should unwind (it would ensure that `_X` gets
1548    /// dropped). However, if no unwind occurs, then `_X` will be
1549    /// unconditionally consumed by the `call`:
1550    ///
1551    /// ```ignore (illustrative)
1552    /// bb {
1553    ///   ...
1554    ///   _R = CALL(foo, _X, ...)
1555    /// }
1556    /// ```
1557    ///
1558    /// However, `_X` is still registered to be dropped, and so if we
1559    /// do nothing else, we would generate a `DROP(_X)` that occurs
1560    /// after the call. This will later be optimized out by the
1561    /// drop-elaboration code, but in the meantime it can lead to
1562    /// spurious borrow-check errors -- the problem, ironically, is
1563    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1564    /// that it creates. See #64391 for an example.
1565    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
1566        let local_scope = self.local_scope();
1567        let scope = self.scopes.scopes.last_mut().unwrap();
1568
1569        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1570
1571        // look for moves of a local variable, like `MOVE(_X)`
1572        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
1573            Operand::Copy(_) | Operand::Constant(_) | Operand::RuntimeChecks(_) => None,
1574            Operand::Move(place) => place.as_local(),
1575        });
1576
1577        for local in locals_moved {
1578            // check if we have a Drop for this operand and -- if so
1579            // -- add it to the list of moved operands. Note that this
1580            // local might not have been an operand created for this
1581            // call, it could come from other places too.
1582            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1583                scope.moved_locals.push(local);
1584            }
1585        }
1586    }
1587
1588    // Other
1589    // =====
1590
1591    /// Returns the [DropIdx] for the innermost drop if the function unwound at
1592    /// this point. The `DropIdx` will be created if it doesn't already exist.
1593    fn diverge_cleanup(&mut self) -> DropIdx {
1594        // It is okay to use a dummy span because getting the scope index of the
1595        // topmost scope must always succeed.
1596        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1597    }
1598
1599    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1600    /// some ancestor scope instead of the current scope.
1601    /// It is possible to unwind to an ancestor scope if a drop panics while
1602    /// the program breaks out of an if-then scope.
1603    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1604        let target = self.scopes.scope_index(target_scope, span);
1605        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1606            .iter()
1607            .enumerate()
1608            .rev()
1609            .find_map(|(scope_idx, scope)| {
1610                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1611            })
1612            .unwrap_or((0, ROOT_NODE));
1613
1614        if uncached_scope > target {
1615            return cached_drop;
1616        }
1617
1618        let is_coroutine = self.coroutine.is_some();
1619        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1620            for drop in &scope.drops {
1621                if is_coroutine || drop.kind == DropKind::Value {
1622                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1623                }
1624            }
1625            scope.cached_unwind_block = Some(cached_drop);
1626        }
1627
1628        cached_drop
1629    }
1630
1631    /// Prepares to create a path that performs all required cleanup for a
1632    /// terminator that can unwind at the given basic block.
1633    ///
1634    /// This path terminates in Resume. The path isn't created until after all
1635    /// of the non-unwind paths in this item have been lowered.
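    ///
    /// A minimal usage sketch (illustrative; `source_info` and the call's fields stand in for
    /// whatever the caller has at hand):
    ///
    /// ```ignore (illustrative)
    /// let success = self.cfg.start_new_block();
    /// self.cfg.terminate(block, source_info, TerminatorKind::Call { /* target: success, .. */ });
    /// self.diverge_from(block); // the unwind edge is filled in when the unwind drop tree is built
    /// ```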
1636    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1637        debug_assert!(
1638            matches!(
1639                self.cfg.block_data(start).terminator().kind,
1640                TerminatorKind::Assert { .. }
1641                    | TerminatorKind::Call { .. }
1642                    | TerminatorKind::Drop { .. }
1643                    | TerminatorKind::FalseUnwind { .. }
1644                    | TerminatorKind::InlineAsm { .. }
1645            ),
1646            "diverge_from called on block with terminator that cannot unwind."
1647        );
1648
1649        let next_drop = self.diverge_cleanup();
1650        self.scopes.unwind_drops.add_entry_point(start, next_drop);
1651    }
1652
1653    /// Returns the [DropIdx] for the innermost drop on the dropline (coroutine drop path).
1654    /// The `DropIdx` will be created if it doesn't already exist.
1655    fn diverge_dropline(&mut self) -> DropIdx {
1656        // It is okay to use a dummy span because getting the scope index of the
1657        // topmost scope must always succeed.
1658        self.diverge_dropline_target(self.scopes.topmost(), DUMMY_SP)
1659    }
1660
1661    /// Similar to [diverge_cleanup_target](Self::diverge_cleanup_target), but for the dropline (coroutine drop path).
1662    fn diverge_dropline_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1663        debug_assert!(
1664            self.coroutine.is_some(),
1665            "diverge_dropline_target is valid only for coroutine"
1666        );
1667        let target = self.scopes.scope_index(target_scope, span);
1668        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1669            .iter()
1670            .enumerate()
1671            .rev()
1672            .find_map(|(scope_idx, scope)| {
1673                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1674            })
1675            .unwrap_or((0, ROOT_NODE));
1676
1677        if uncached_scope > target {
1678            return cached_drop;
1679        }
1680
1681        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1682            for drop in &scope.drops {
1683                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
1684            }
1685            scope.cached_coroutine_drop_block = Some(cached_drop);
1686        }
1687
1688        cached_drop
1689    }
1690
1691    /// Sets up a path that performs all required cleanup for dropping a
1692    /// coroutine, starting from the given block that ends in
1693    /// [TerminatorKind::Yield].
1694    ///
1695    /// This path terminates in CoroutineDrop.
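    ///
    /// A hedged sketch of the calling pattern (illustrative only):
    ///
    /// ```ignore (illustrative)
    /// self.cfg.terminate(block, source_info, TerminatorKind::Yield { /* resume, .. */ });
    /// self.coroutine_drop_cleanup(block); // the `drop` edge of the `Yield` is linked later
    /// ```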
1696    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
1697        debug_assert!(
1698            matches!(
1699                self.cfg.block_data(yield_block).terminator().kind,
1700                TerminatorKind::Yield { .. }
1701            ),
1702            "coroutine_drop_cleanup called on block with non-yield terminator."
1703        );
1704        let cached_drop = self.diverge_dropline();
1705        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
1706    }
1707
1708    /// Utility function for *non*-scope code to build its own drops.
1709    /// Forces a drop at this point in the MIR by creating a new block.
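    ///
    /// Illustratively (a sketch, not the only caller), an assignment to an already-initialized,
    /// droppable place is the kind of situation this handles: drop the old value with
    /// `replace: true`, then perform the assignment in the target block (and in a cleanup
    /// block on the unwind path).
    ///
    /// ```ignore (illustrative)
    /// let mut x = String::from("old");
    /// x = String::from("new"); // the old value is dropped, then the new value is written
    /// ```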
1710    pub(crate) fn build_drop_and_replace(
1711        &mut self,
1712        block: BasicBlock,
1713        span: Span,
1714        place: Place<'tcx>,
1715        value: Rvalue<'tcx>,
1716    ) -> BlockAnd<()> {
1717        let source_info = self.source_info(span);
1718
1719        // create the new block for the assignment
1720        let assign = self.cfg.start_new_block();
1721        self.cfg.push_assign(assign, source_info, place, value.clone());
1722
1723        // create the new block for the assignment in the case of unwinding
1724        let assign_unwind = self.cfg.start_new_cleanup_block();
1725        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1726
1727        self.cfg.terminate(
1728            block,
1729            source_info,
1730            TerminatorKind::Drop {
1731                place,
1732                target: assign,
1733                unwind: UnwindAction::Cleanup(assign_unwind),
1734                replace: true,
1735                drop: None,
1736                async_fut: None,
1737            },
1738        );
1739        self.diverge_from(block);
1740
1741        assign.unit()
1742    }
1743
1744    /// Creates an `Assert` terminator and returns the success block.
1745    /// If the boolean condition operand is not the expected value,
1746    /// a runtime panic will be caused with the given message.
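    ///
    /// For example (illustrative; the checks actually emitted depend on the expression being
    /// lowered and the compilation settings), indexing and checked arithmetic are guarded by
    /// such terminators:
    ///
    /// ```ignore (illustrative)
    /// let a = [1, 2, 3];
    /// let x = a[i];  // an `Assert` checks `i < 3` before the access
    /// let y = i + 1; // with overflow checks enabled, an `Assert` guards the addition
    /// ```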
1747    pub(crate) fn assert(
1748        &mut self,
1749        block: BasicBlock,
1750        cond: Operand<'tcx>,
1751        expected: bool,
1752        msg: AssertMessage<'tcx>,
1753        span: Span,
1754    ) -> BasicBlock {
1755        let source_info = self.source_info(span);
1756        let success_block = self.cfg.start_new_block();
1757
1758        self.cfg.terminate(
1759            block,
1760            source_info,
1761            TerminatorKind::Assert {
1762                cond,
1763                expected,
1764                msg: Box::new(msg),
1765                target: success_block,
1766                unwind: UnwindAction::Continue,
1767            },
1768        );
1769        self.diverge_from(block);
1770
1771        success_block
1772    }
1773
1774    /// Unschedules any drops in the top two scopes.
1775    ///
1776    /// This is only needed for pattern-matches combining guards and or-patterns: or-patterns lead
1777    /// to guards being lowered multiple times before lowering the arm body, so we unschedule drops
1778    /// for guards' temporaries and bindings between lowering each instance of a match arm's guard.
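    ///
    /// A hedged illustration (hypothetical source): with an or-pattern, the guard below is
    /// lowered once per alternative, so drops scheduled while lowering its first instance are
    /// unscheduled before the second instance is lowered.
    ///
    /// ```ignore (illustrative)
    /// match v {
    ///     Some(1) | Some(2) if check(&v) => body(),
    ///     _ => {}
    /// }
    /// ```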
1779    pub(crate) fn clear_match_arm_and_guard_scopes(&mut self, region_scope: region::Scope) {
1780        let [.., arm_scope, guard_scope] = &mut *self.scopes.scopes else {
1781            bug!("matches with guards should introduce separate scopes for the pattern and guard");
1782        };
1783
1784        assert_eq!(arm_scope.region_scope, region_scope);
1785        assert_eq!(guard_scope.region_scope.data, region::ScopeData::MatchGuard);
1786        assert_eq!(guard_scope.region_scope.local_id, region_scope.local_id);
1787
1788        arm_scope.drops.clear();
1789        arm_scope.invalidate_cache();
1790        guard_scope.drops.clear();
1791        guard_scope.invalidate_cache();
1792    }
1793}
1794
1795/// Builds drops for `pop_scope` and `leave_top_scope`.
1796///
1797/// # Parameters
1798///
1799/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
1800/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
1801/// * `block`, the block to branch to once drops are complete (assuming no unwind occurs)
1802/// * `unwind_to`, describes the drops that would occur at this point in the code if a
1803///   panic occurred (a subset of the drops in `scope`, since we sometimes elide StorageDead and other
1804///   instructions on unwinding)
1805/// * `dropline_to`, describes the drops that would occur at this point in the code if a
1806///    coroutine drop occurred.
1807/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
1808/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
1809fn build_scope_drops<'tcx, F>(
1810    cfg: &mut CFG<'tcx>,
1811    unwind_drops: &mut DropTree,
1812    coroutine_drops: &mut DropTree,
1813    scope: &Scope,
1814    block: BasicBlock,
1815    unwind_to: DropIdx,
1816    dropline_to: Option<DropIdx>,
1817    storage_dead_on_unwind: bool,
1818    arg_count: usize,
1819    is_async_drop: F,
1820) -> BlockAnd<()>
1821where
1822    F: Fn(Local) -> bool,
1823{
1824    debug!("build_scope_drops({:?} -> {:?}), dropline_to={:?}", block, scope, dropline_to);
1825
1826    // Build up the drops in evaluation order. The end result will
1827    // look like:
1828    //
1829    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1830    //               |                    |                 |
1831    //               :                    |                 |
1832    //                                    V                 V
1833    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1834    //
1835    // The horizontal arrows represent the execution path when the drops return
1836    // successfully. The downwards arrows represent the execution path when the
1837    // drops panic (panicking while unwinding will abort, so there's no need for
1838    // another set of arrows).
1839    //
1840    // For coroutines, we unwind from a drop on a local to its StorageDead
1841    // statement. For other functions we don't worry about StorageDead. The
1842    // drops for the unwind path should have already been generated by
1843    // `diverge_cleanup_gen`.
1844
1845    // `unwind_to` indicates what needs to be dropped should unwinding occur.
1846    // This is a subset of what needs to be dropped when exiting the scope.
1847    // As we unwind the scope, we will also move `unwind_to` backwards to match,
1848    // so that we can use it should a destructor panic.
1849    let mut unwind_to = unwind_to;
1850
1851    // The block that we should jump to after drops complete. We start by building the final drop (`drops[n]`
1852    // in the diagram above) and then build the drops (e.g., `drop[1]`, `drop[0]`) that come before it.
1853    // block begins as the successor of `drops[n]` and then becomes `drops[n]` so that `drops[n-1]`
1854    // will branch to `drops[n]`.
1855    let mut block = block;
1856
1857    // `dropline_to` indicates what needs to be dropped should coroutine drop occur.
1858    let mut dropline_to = dropline_to;
1859
1860    for drop_data in scope.drops.iter().rev() {
1861        let source_info = drop_data.source_info;
1862        let local = drop_data.local;
1863
1864        match drop_data.kind {
1865            DropKind::Value => {
1866                // `unwind_to` should drop the value that we're about to
1867                // schedule. If dropping this value panics, then we continue
1868                // with the *next* value on the unwind path.
1869                //
1870                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
1871                // because `drops[n]` should unwind to `drops[n-1]`.
1872                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.local, drop_data.local);
1873                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1874                unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1875
1876                if let Some(idx) = dropline_to {
1877                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1878                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1879                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1880                }
1881
1882                // If the operand has been moved, and we are not on an unwind
1883                // path, then don't generate the drop. (We only take this into
1884                // account for non-unwind paths so as not to disturb the
1885                // caching mechanism.)
1886                if scope.moved_locals.contains(&local) {
1887                    continue;
1888                }
1889
1890                unwind_drops.add_entry_point(block, unwind_to);
1891                if let Some(to) = dropline_to
1892                    && is_async_drop(local)
1893                {
1894                    coroutine_drops.add_entry_point(block, to);
1895                }
1896
1897                let next = cfg.start_new_block();
1898                cfg.terminate(
1899                    block,
1900                    source_info,
1901                    TerminatorKind::Drop {
1902                        place: local.into(),
1903                        target: next,
1904                        unwind: UnwindAction::Continue,
1905                        replace: false,
1906                        drop: None,
1907                        async_fut: None,
1908                    },
1909                );
1910                block = next;
1911            }
1912            DropKind::ForLint => {
1913                // As in the `DropKind::Storage` case below:
1914                // normally lint-related drops are not emitted for unwind,
1915                // so we can just leave `unwind_to` unmodified, but in some
1916                // cases we emit things ALSO on the unwind path, so we need to adjust
1917                // `unwind_to` in that case.
1918                if storage_dead_on_unwind {
1919                    debug_assert_eq!(
1920                        unwind_drops.drop_nodes[unwind_to].data.local,
1921                        drop_data.local
1922                    );
1923                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1924                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1925                }
1926
1927                // If the operand has been moved, and we are not on an unwind
1928                // path, then don't generate the drop. (We only take this into
1929                // account for non-unwind paths so as not to disturb the
1930                // caching mechanism.)
1931                if scope.moved_locals.contains(&local) {
1932                    continue;
1933                }
1934
1935                cfg.push(
1936                    block,
1937                    Statement::new(
1938                        source_info,
1939                        StatementKind::BackwardIncompatibleDropHint {
1940                            place: Box::new(local.into()),
1941                            reason: BackwardIncompatibleDropReason::Edition2024,
1942                        },
1943                    ),
1944                );
1945            }
1946            DropKind::Storage => {
1947                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
1948                // need to adjust `unwind_to` on this path. However, in some specific cases
1949                // we *do* emit storage-dead nodes on the unwind path, and in that case now that
1950                // the storage-dead has completed, we need to adjust the `unwind_to` pointer
1951                // so that any future drops we emit will not register storage-dead.
1952                if storage_dead_on_unwind {
1953                    debug_assert_eq!(
1954                        unwind_drops.drop_nodes[unwind_to].data.local,
1955                        drop_data.local
1956                    );
1957                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1958                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1959                }
1960                if let Some(idx) = dropline_to {
1961                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1962                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1963                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1964                }
1965                // Only temps and vars need their storage dead.
1966                assert!(local.index() > arg_count);
1967                cfg.push(block, Statement::new(source_info, StatementKind::StorageDead(local)));
1968            }
1969        }
1970    }
1971    block.unit()
1972}
1973
1974impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1975    /// Build a drop tree for a breakable scope.
1976    ///
1977    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1978    /// loop. Otherwise this is for `break` or `return`.
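    ///
    /// A hedged source-level sketch (illustrative) of the exits such a tree collects:
    ///
    /// ```ignore (illustrative)
    /// 'outer: loop {
    ///     let g = make_guard();
    ///     if done() { break 'outer; } // a `break` exit: `g` must be dropped on the way out
    ///     // a separate tree (one with `continue_block` set) would serve `continue 'outer`
    /// }
    /// ```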
1979    fn build_exit_tree(
1980        &mut self,
1981        mut drops: DropTree,
1982        else_scope: region::Scope,
1983        span: Span,
1984        continue_block: Option<BasicBlock>,
1985    ) -> Option<BlockAnd<()>> {
1986        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
1987        let is_coroutine = self.coroutine.is_some();
1988
1989        // Link the exit drop tree to unwind drop tree.
1990        if drops.drop_nodes.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
1991            let unwind_target = self.diverge_cleanup_target(else_scope, span);
1992            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1993            for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated().skip(1) {
1994                match drop_node.data.kind {
1995                    DropKind::Storage | DropKind::ForLint => {
1996                        if is_coroutine {
1997                            let unwind_drop = self
1998                                .scopes
1999                                .unwind_drops
2000                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
2001                            unwind_indices.push(unwind_drop);
2002                        } else {
2003                            unwind_indices.push(unwind_indices[drop_node.next]);
2004                        }
2005                    }
2006                    DropKind::Value => {
2007                        let unwind_drop = self
2008                            .scopes
2009                            .unwind_drops
2010                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
2011                        self.scopes.unwind_drops.add_entry_point(
2012                            blocks[drop_idx].unwrap(),
2013                            unwind_indices[drop_node.next],
2014                        );
2015                        unwind_indices.push(unwind_drop);
2016                    }
2017                }
2018            }
2019        }
2020        // Link the exit drop tree to dropline drop tree (coroutine drop path) for async drops
2021        if is_coroutine
2022            && drops.drop_nodes.iter().any(|DropNode { data, next: _ }| {
2023                data.kind == DropKind::Value && self.is_async_drop(data.local)
2024            })
2025        {
2026            let dropline_target = self.diverge_dropline_target(else_scope, span);
2027            let mut dropline_indices = IndexVec::from_elem_n(dropline_target, 1);
2028            for (drop_idx, drop_data) in drops.drop_nodes.iter_enumerated().skip(1) {
2029                let coroutine_drop = self
2030                    .scopes
2031                    .coroutine_drops
2032                    .add_drop(drop_data.data, dropline_indices[drop_data.next]);
2033                match drop_data.data.kind {
2034                    DropKind::Storage | DropKind::ForLint => {}
2035                    DropKind::Value => {
2036                        if self.is_async_drop(drop_data.data.local) {
2037                            self.scopes.coroutine_drops.add_entry_point(
2038                                blocks[drop_idx].unwrap(),
2039                                dropline_indices[drop_data.next],
2040                            );
2041                        }
2042                    }
2043                }
2044                dropline_indices.push(coroutine_drop);
2045            }
2046        }
2047        blocks[ROOT_NODE].map(BasicBlock::unit)
2048    }
2049
2050    /// Build the unwind and coroutine drop trees.
2051    pub(crate) fn build_drop_trees(&mut self) {
2052        if self.coroutine.is_some() {
2053            self.build_coroutine_drop_trees();
2054        } else {
2055            Self::build_unwind_tree(
2056                &mut self.cfg,
2057                &mut self.scopes.unwind_drops,
2058                self.fn_span,
2059                &mut None,
2060            );
2061        }
2062    }
2063
2064    fn build_coroutine_drop_trees(&mut self) {
2065        // Build the drop tree for dropping the coroutine while it's suspended.
2066        let drops = &mut self.scopes.coroutine_drops;
2067        let cfg = &mut self.cfg;
2068        let fn_span = self.fn_span;
2069        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
2070        if let Some(root_block) = blocks[ROOT_NODE] {
2071            cfg.terminate(
2072                root_block,
2073                SourceInfo::outermost(fn_span),
2074                TerminatorKind::CoroutineDrop,
2075            );
2076        }
2077
2078        // Build the drop tree for unwinding in the normal control flow paths.
2079        let resume_block = &mut None;
2080        let unwind_drops = &mut self.scopes.unwind_drops;
2081        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
2082
2083        // Build the drop tree for unwinding when dropping a suspended
2084        // coroutine.
2085        //
2086        // This is a separate tree from the standard unwind paths, to prevent
2087        // drop elaboration from creating drop flags that would have
2088        // to be captured by the coroutine. I'm not sure how important this
2089        // optimization is, but it is here.
2090        for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated() {
2091            if let DropKind::Value = drop_node.data.kind
2092                && let Some(bb) = blocks[drop_idx]
2093            {
2094                debug_assert!(drop_node.next < drops.drop_nodes.next_index());
2095                drops.entry_points.push((drop_node.next, bb));
2096            }
2097        }
2098        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
2099    }
2100
2101    fn build_unwind_tree(
2102        cfg: &mut CFG<'tcx>,
2103        drops: &mut DropTree,
2104        fn_span: Span,
2105        resume_block: &mut Option<BasicBlock>,
2106    ) {
2107        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
2108        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
2109            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);
2110
2111            *resume_block = blocks[ROOT_NODE];
2112        }
2113    }
2114}
2115
2116// DropTreeBuilder implementations.
2117
2118struct ExitScopes;
2119
2120impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
2121    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2122        cfg.start_new_block()
2123    }
2124    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2125        // There should be an existing terminator with real source info and a
2126        // dummy TerminatorKind. Replace it with a proper goto.
2127        // (The dummy is added by `break_scope` and `break_for_else`.)
2128        let term = cfg.block_data_mut(from).terminator_mut();
2129        if let TerminatorKind::UnwindResume = term.kind {
2130            term.kind = TerminatorKind::Goto { target: to };
2131        } else {
2132            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
2133        }
2134    }
2135}
2136
2137struct CoroutineDrop;
2138
2139impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
2140    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2141        cfg.start_new_block()
2142    }
2143    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2144        let term = cfg.block_data_mut(from).terminator_mut();
2145        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
2146            *drop = Some(to);
2147        } else if let TerminatorKind::Drop { ref mut drop, .. } = term.kind {
2148            *drop = Some(to);
2149        } else {
2150            span_bug!(
2151                term.source_info.span,
2152                "cannot enter coroutine drop tree from {:?}",
2153                term.kind
2154            )
2155        }
2156    }
2157}
2158
2159struct Unwind;
2160
2161impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
2162    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2163        cfg.start_new_cleanup_block()
2164    }
2165    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2166        let term = &mut cfg.block_data_mut(from).terminator_mut();
2167        match &mut term.kind {
2168            TerminatorKind::Drop { unwind, .. } => {
2169                if let UnwindAction::Cleanup(unwind) = *unwind {
2170                    let source_info = term.source_info;
2171                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
2172                } else {
2173                    *unwind = UnwindAction::Cleanup(to);
2174                }
2175            }
2176            TerminatorKind::FalseUnwind { unwind, .. }
2177            | TerminatorKind::Call { unwind, .. }
2178            | TerminatorKind::Assert { unwind, .. }
2179            | TerminatorKind::InlineAsm { unwind, .. } => {
2180                *unwind = UnwindAction::Cleanup(to);
2181            }
2182            TerminatorKind::Goto { .. }
2183            | TerminatorKind::SwitchInt { .. }
2184            | TerminatorKind::UnwindResume
2185            | TerminatorKind::UnwindTerminate(_)
2186            | TerminatorKind::Return
2187            | TerminatorKind::TailCall { .. }
2188            | TerminatorKind::Unreachable
2189            | TerminatorKind::Yield { .. }
2190            | TerminatorKind::CoroutineDrop
2191            | TerminatorKind::FalseEdge { .. } => {
2192                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
2193            }
2194        }
2195    }
2196}