rustc_mir_build/builder/scope.rs

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

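For example (an illustrative snippet, not taken from any particular test), the
scope of this loop body has a single entry at the top of the body and two exits:

```
# let cond = true;
loop {
    // entering the body pushes its scope: single entry
    if cond { break; } // one exit: the `break` edge
    let _x = 0;
}   // another exit: falling through to the backedge at the end of the body
```
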
For now, we record, for each SEME region, the `region::Scope` it corresponds
to, for later reference (see caveat in next paragraph). This is because
destruction scopes are tied to them. This may change in the future so that MIR
lowering determines its own destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not even always SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.

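As a small illustration (made-up code), a `return` from inside nested scopes
leaves all of them at once, so the edge built for it must drop everything
scheduled in the scopes being left:

```
fn f(cond: bool) {
    let a = String::new();
    {
        let b = String::new();
        if cond {
            return; // this exit edge must drop `b` and then `a`
        }
    } // falling out of the inner scope drops `b`
} // falling out of the outer scope drops `a`
```
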
Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.

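For example (illustrative only), a labeled block is a breakable scope even
though it is not a loop; `break 'b value` targets it and writes `value` to that
scope's break destination:

```
# let xs = [1, 2, 3];
let first_even = 'b: {
    for &x in &xs {
        if x % 2 == 0 {
            break 'b Some(x); // targets the labeled block's breakable scope
        }
    }
    None
};
```
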
*/

use std::mem;

use rustc_data_structures::fx::FxHashMap;
use rustc_hir::{self as hir, HirId};
use rustc_index::{IndexSlice, IndexVec};
use rustc_middle::middle::region;
use rustc_middle::mir::interpret::ErrorHandled;
use rustc_middle::mir::{self, *};
use rustc_middle::thir::{AdtExpr, AdtExprBase, ArmId, ExprId, ExprKind, LintLevel};
use rustc_middle::ty::{self, Ty, TyCtxt, TypeVisitableExt, ValTree};
use rustc_middle::{bug, span_bug};
use rustc_pattern_analysis::rustc::RustcPatCtxt;
use rustc_session::lint::Level;
use rustc_span::source_map::Spanned;
use rustc_span::{DUMMY_SP, Span};
use tracing::{debug, instrument};

use super::matches::BuiltMatchTree;
use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::errors::{
    ConstContinueBadConst, ConstContinueNotMonomorphicConst, ConstContinueUnknownJumpTarget,
};

#[derive(Debug)]
pub(crate) struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// The current set of const-continuable scopes, one per enclosing `#[loop_match]`.
    const_continuable_scopes: Vec<ConstContinuableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
    coroutine_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope of this scope within the source code.
    region_scope: region::Scope,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// Locals that have been moved out of this scope since their drops were
    /// scheduled, so no actual drop needs to be emitted for them.
    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// coroutine drop path.
    cached_coroutine_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    /// An actual `Drop` of the value.
    Value,
    /// Only a `StorageDead` marker for the local; no destructor runs.
    Storage,
    /// Not a real drop: lowers to a `BackwardIncompatibleDropHint` statement,
    /// used only by the backward-incompatible drop-order lints.
    ForLint(BackwardIncompatibleDropReason),
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop or breakable block.
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression).
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct ConstContinuableScope<'tcx> {
    /// The region scope of the `#[loop_match]` that its `#[const_continue]`s will jump to.
    region_scope: region::Scope,
    /// The place of the state of a `#[loop_match]`, which a `#[const_continue]` must update.
    state_place: Place<'tcx>,

    /// The arms of the `#[loop_match]` match, used to find the arm a
    /// `#[const_continue]` jumps to.
    arms: Box<[ArmId]>,
    /// The lowered match tree for those arms, used to resolve the jump target.
    built_match_tree: BuiltMatchTree<'tcx>,

    /// Drops that happen on a `#[const_continue]`.
    const_continue_drops: DropTree,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope.
    region_scope: region::Scope,
    /// Drops that happen on the `else` path.
    else_drops: DropTree,
}

/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
pub(crate) enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    #[orderable]
    struct DropIdx {}
}

const ROOT_NODE: DropIdx = DropIdx::ZERO;

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
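///
/// As a rough illustration (made-up code, not from any test), two early exits
/// whose drop sequences share a common tail share the corresponding chain of
/// nodes in the tree, so those drops are only lowered once:
///
/// ```ignore (illustrative)
/// let a = String::new();
/// if cond() { return; }  // entry point at the node for `a`
/// let b = String::new();
/// if cond() { return; }  // entry point at the node for `b`, whose `next`
///                        // is the shared node for `a`
/// ```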
#[derive(Debug)]
struct DropTree {
    /// Nodes in the drop tree, containing drop data and a link to the next node.
    drop_nodes: IndexVec<DropIdx, DropNode>,
    /// Map for finding the index of an existing node, given its contents.
    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

/// A single node in the drop tree.
#[derive(Debug)]
struct DropNode {
    /// Info about the drop to be performed at this node in the drop tree.
    data: DropData,
    /// Index of the "next" drop to perform (in drop order, not declaration order).
    next: DropIdx,
}

/// Subset of [`DropNode`] used for reverse lookup in a hash table.
#[derive(Debug, PartialEq, Eq, Hash)]
struct DropNodeKey {
    next: DropIdx,
    local: Local,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for use by
    /// optimizations in the MIR coroutine transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value | DropKind::ForLint(_) => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_coroutine_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_nodes = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
        Self { drop_nodes, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
    }

    /// Adds a node to the drop tree, consisting of drop data and the index of
    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
    ///
    /// If there is already an equivalent node in the tree, nothing is added, and
    /// that node's index is returned. Otherwise, the new node's index is returned.
    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
        let drop_nodes = &mut self.drop_nodes;
        *self
            .existing_drops_map
            .entry(DropNodeKey { next, local: data.local })
            // Create a new node, and also add its index to the map.
            .or_insert_with(|| drop_nodes.push(DropNode { data, next }))
    }

    /// Registers `from` as an entry point to this drop tree, at `to`.
    ///
    /// During [`Self::build_mir`], `from` will be linked to the corresponding
    /// block within the drop tree.
    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drop_nodes.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        debug!("DropTree::build_mir(drops = {:#?})", self);

        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
        self.link_blocks(cfg, &mut blocks);

        blocks
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut blocks = IndexVec::from_elem(None, &self.drop_nodes);
        blocks[ROOT_NODE] = root_node;

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drop_nodes);
        if root_node.is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::link_entry_point(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_node.data.kind {
                needs_block[drop_node.next] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_node.next] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());

        blocks
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_node.data.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_node.next].unwrap(),
                        // The caller will handle this if needed.
                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
                        place: drop_node.data.local.into(),
                        replace: false,
                        drop: None,
                        async_fut: None,
                    };
                    cfg.terminate(block, drop_node.data.source_info, terminator);
                }
                DropKind::ForLint(reason) => {
                    let stmt = Statement::new(
                        drop_node.data.source_info,
                        StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(drop_node.data.local.into()),
                            reason,
                        },
                    );
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement::new(
                        drop_node.data.source_info,
                        StatementKind::StorageDead(drop_node.data.local),
                    );
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            const_continuable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            coroutine_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_coroutine_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    /// Start a breakable scope, which tracks where `continue`, `break` and
    /// `return` should branch to.
    pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start a const-continuable scope, which tracks where `#[const_continue] break` should
    /// branch to.
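    ///
    /// A `#[loop_match]` loop looks roughly like this (a sketch of the unstable
    /// surface syntax; details may differ):
    ///
    /// ```ignore (illustrative)
    /// #[loop_match]
    /// loop {
    ///     state = 'blk: {
    ///         match state {
    ///             State::A => {
    ///                 #[const_continue]
    ///                 break 'blk State::B; // jumps straight to the `State::B` arm
    ///             }
    ///             State::B => break,
    ///         }
    ///     };
    /// }
    /// ```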
    pub(crate) fn in_const_continuable_scope<F>(
        &mut self,
        arms: Box<[ArmId]>,
        built_match_tree: BuiltMatchTree<'tcx>,
        state_place: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let region_scope = self.scopes.topmost();
        let scope = ConstContinuableScope {
            region_scope,
            state_place,
            const_continue_drops: DropTree::new(),
            arms,
            built_match_tree,
        };
        self.scopes.const_continuable_scopes.push(scope);
        let normal_exit_block = f(self);
        let const_continue_scope = self.scopes.const_continuable_scopes.pop().unwrap();
        assert!(const_continue_scope.region_scope == region_scope);

        let break_block = self.build_exit_tree(
            const_continue_scope.const_continue_drops,
            region_scope,
            span,
            None,
        );

        match (normal_exit_block, break_block) {
            (block, None) => block,
            (normal_block, Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    /// ```
    ///
    /// There are three possible ways the condition can be false and we may have
    /// to drop `x`, `x` and `y`, or neither, depending on which binding fails.
    /// To handle this correctly we use a `DropTree` in a similar way to a
    /// `loop` expression and 'break' out on all of the 'else' paths.
    ///
    /// Notes:
    /// - We don't need to keep a stack of scopes in the `Builder` because the
    ///   'else' paths will only leave the innermost scope.
    /// - This is also used for match guards.
    pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = f(self).into_block();

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block =
            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
                || self.cfg.start_new_block(),
                |else_block_and| else_block_and.into_block(),
            );

        (then_block, else_block)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    #[instrument(skip(self, f), level = "debug")]
    pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        let source_scope = self.source_scope;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            let parent_id =
                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        block = self.pop_scope(region_scope, block).into_block();
        self.source_scope = source_scope;
        debug!(?block);
        block.and(rv)
    }

    /// Convenience wrapper that executes `f` either within the current scope or a new scope.
    /// Used for pattern matching, which introduces an additional scope for patterns with guards.
    pub(crate) fn opt_in_scope<R>(
        &mut self,
        opt_region_scope: Option<(region::Scope, SourceInfo)>,
        f: impl FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    ) -> BlockAnd<R> {
        if let Some(region_scope) = opt_region_scope {
            self.in_scope(region_scope, LintLevel::Inherited, f)
        } else {
            f(self)
        }
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprId>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        match (destination, value) {
            (Some(destination), Some(value)) => {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                block = self.expr_into_dest(destination, block, value).into_block();
                self.block_context.pop();
            }
            (Some(destination), None) => {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with a value must have a destination")
            }
            (None, None) => {
                if self.tcx.sess.instrument_coverage() {
                    // Normally we wouldn't build any MIR in this case, but that makes it
                    // harder for coverage instrumentation to extract a relevant span for
                    // `continue` expressions. So here we inject a dummy statement with the
                    // desired span.
                    self.cfg.push_coverage_span_marker(block, source_info);
                }
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
            else {
                self.tcx.dcx().span_delayed_bug(
                    source_info.span,
                    "unlabelled `continue` within labelled block",
                );
                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);

                return self.cfg.start_new_block().unit();
            };
            drops
        };

        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);

        self.cfg.start_new_block().unit()
    }

    /// Based on `FunctionCx::eval_unevaluated_mir_constant_to_valtree`.
    fn eval_unevaluated_mir_constant_to_valtree(
        &self,
        constant: ConstOperand<'tcx>,
    ) -> Result<(ty::ValTree<'tcx>, Ty<'tcx>), interpret::ErrorHandled> {
        assert!(!constant.const_.ty().has_param());
        let (uv, ty) = match constant.const_ {
            mir::Const::Unevaluated(uv, ty) => (uv.shrink(), ty),
            mir::Const::Ty(_, c) => match c.kind() {
                // A constant that came from a const generic but was then used as an argument to
                // old-style simd_shuffle (passing as argument instead of as a generic param).
                ty::ConstKind::Value(cv) => return Ok((cv.valtree, cv.ty)),
                other => span_bug!(constant.span, "{other:#?}"),
            },
            mir::Const::Val(mir::ConstValue::Scalar(mir::interpret::Scalar::Int(val)), ty) => {
                return Ok((ValTree::from_scalar_int(self.tcx, val), ty));
            }
            // We should never encounter `Const::Val` unless MIR opts (like const prop) evaluate
            // a constant and write that value back into `Operand`s. This could happen, but is
            // unlikely. Also: all users of `simd_shuffle` are on unstable and already need to take
            // a lot of care around intrinsics. For an issue to happen here, it would require a
            // macro expanding to a `simd_shuffle` call without wrapping the constant argument in a
            // `const {}` block, while the user passes arbitrary expressions through.

            // FIXME(oli-obk): Replace the magic const generic argument of `simd_shuffle` with a
            // real const generic, and get rid of this entire function.
            other => span_bug!(constant.span, "{other:#?}"),
        };

        match self.tcx.const_eval_resolve_for_typeck(self.typing_env(), uv, constant.span) {
            Ok(Ok(valtree)) => Ok((valtree, ty)),
            Ok(Err(ty)) => span_bug!(constant.span, "could not convert {ty:?} to a valtree"),
            Err(e) => Err(e),
        }
    }

    /// Sets up the drops for jumping from `block` to `scope`.
    pub(crate) fn break_const_continuable_scope(
        &mut self,
        mut block: BasicBlock,
        value: ExprId,
        scope: region::Scope,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        // A break can only break out of a scope, so the value should be a scope.
        let rustc_middle::thir::ExprKind::Scope { value, .. } = self.thir[value].kind else {
            span_bug!(span, "break value must be a scope")
        };

        let expr = &self.thir[value];
        let constant = match &expr.kind {
            ExprKind::Adt(box AdtExpr { variant_index, fields, base, .. }) => {
                assert!(matches!(base, AdtExprBase::None));
                assert!(fields.is_empty());
                ConstOperand {
                    span: self.thir[value].span,
                    user_ty: None,
                    const_: Const::Ty(
                        self.thir[value].ty,
                        ty::Const::new_value(
                            self.tcx,
                            ValTree::from_branches(
                                self.tcx,
                                [ValTree::from_scalar_int(self.tcx, variant_index.as_u32().into())],
                            ),
                            self.thir[value].ty,
                        ),
                    ),
                }
            }

            ExprKind::Literal { .. }
            | ExprKind::NonHirLiteral { .. }
            | ExprKind::ZstLiteral { .. }
            | ExprKind::NamedConst { .. } => self.as_constant(&self.thir[value]),

            other => {
                use crate::errors::ConstContinueNotMonomorphicConstReason as Reason;

                let span = expr.span;
                let reason = match other {
                    ExprKind::ConstParam { .. } => Reason::ConstantParameter { span },
                    ExprKind::ConstBlock { .. } => Reason::ConstBlock { span },
                    _ => Reason::Other { span },
                };

                self.tcx
                    .dcx()
                    .emit_err(ConstContinueNotMonomorphicConst { span: expr.span, reason });
                return block.unit();
            }
        };

        let break_index = self
            .scopes
            .const_continuable_scopes
            .iter()
            .rposition(|const_continuable_scope| const_continuable_scope.region_scope == scope)
            .unwrap_or_else(|| span_bug!(span, "no enclosing const-continuable scope found"));

        let scope = &self.scopes.const_continuable_scopes[break_index];

        let state_decl = &self.local_decls[scope.state_place.as_local().unwrap()];
        let state_ty = state_decl.ty;
        let (discriminant_ty, rvalue) = match state_ty.kind() {
            ty::Adt(adt_def, _) if adt_def.is_enum() => {
                (state_ty.discriminant_ty(self.tcx), Rvalue::Discriminant(scope.state_place))
            }
            ty::Uint(_) | ty::Int(_) | ty::Float(_) | ty::Bool | ty::Char => {
                (state_ty, Rvalue::Use(Operand::Copy(scope.state_place)))
            }
            _ => span_bug!(state_decl.source_info.span, "unsupported #[loop_match] state"),
        };

        // The `PatCtxt` is normally used in pattern exhaustiveness checking, but reused
        // here because it performs normalization and const evaluation.
        let dropless_arena = rustc_arena::DroplessArena::default();
        let typeck_results = self.tcx.typeck(self.def_id);
        let cx = RustcPatCtxt {
            tcx: self.tcx,
            typeck_results,
            module: self.tcx.parent_module(self.hir_id).to_def_id(),
            // FIXME(#132279): We're in a body, should handle opaques.
            typing_env: rustc_middle::ty::TypingEnv::non_body_analysis(self.tcx, self.def_id),
            dropless_arena: &dropless_arena,
            match_lint_level: self.hir_id,
            whole_match_span: Some(rustc_span::Span::default()),
            scrut_span: rustc_span::Span::default(),
            refutable: true,
            known_valid_scrutinee: true,
            internal_state: Default::default(),
        };

        let valtree = match self.eval_unevaluated_mir_constant_to_valtree(constant) {
            Ok((valtree, ty)) => {
                // Defensively check that the type is monomorphic.
                assert!(!ty.has_param());

                valtree
            }
            Err(ErrorHandled::Reported(..)) => {
                return block.unit();
            }
            Err(ErrorHandled::TooGeneric(_)) => {
                self.tcx.dcx().emit_fatal(ConstContinueBadConst { span: constant.span });
            }
        };

        let Some(real_target) =
            self.static_pattern_match(&cx, valtree, &*scope.arms, &scope.built_match_tree)
        else {
            self.tcx.dcx().emit_fatal(ConstContinueUnknownJumpTarget { span })
        };

        self.block_context.push(BlockFrame::SubExpr);
        let state_place = scope.state_place;
        block = self.expr_into_dest(state_place, block, value).into_block();
        self.block_context.pop();

        let discr = self.temp(discriminant_ty, source_info.span);
        let scope_index = self
            .scopes
            .scope_index(self.scopes.const_continuable_scopes[break_index].region_scope, span);
        let scope = &mut self.scopes.const_continuable_scopes[break_index];
        self.cfg.push_assign(block, source_info, discr, rvalue);
        let drop_and_continue_block = self.cfg.start_new_block();
        let imaginary_target = self.cfg.start_new_block();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::FalseEdge { real_target: drop_and_continue_block, imaginary_target },
        );

        let drops = &mut scope.const_continue_drops;

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(imaginary_target, drop_idx);

        self.cfg.terminate(imaginary_target, source_info, TerminatorKind::UnwindResume);

        let region_scope = scope.region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let mut drops = DropTree::new();

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(drop_and_continue_block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(drop_and_continue_block, source_info, TerminatorKind::UnwindResume);

        self.build_exit_tree(drops, region_scope, span, Some(real_target));

        return self.cfg.start_new_block().unit();
    }

    /// Sets up the drops for breaking from `block` due to an `if` condition
    /// that turned out to be false.
    ///
    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
    /// there is an if-then scope to tell us what the target scope is.
    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_ref()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        let target = if_then_scope.region_scope;
        let scope_index = self.scopes.scope_index(target, source_info.span);

        // Upgrade `if_then_scope` to `&mut`.
        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
    }

    /// Sets up the drops for explicit tail calls.
    ///
    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
    /// Instead, all scheduled drops are immediately added to the CFG.
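    ///
    /// For instance (a rough sketch; `become` requires the unstable
    /// `explicit_tail_calls` feature):
    ///
    /// ```ignore (illustrative)
    /// fn f(x: String) -> usize {
    ///     let local = String::new();
    ///     // `local` is dropped here, before the tail call; the argument `x`
    ///     // has been moved into the call, so it is only dropped on the
    ///     // unwind paths of those preceding drops.
    ///     become g(x)
    /// }
    /// ```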
1081    pub(crate) fn break_for_tail_call(
1082        &mut self,
1083        mut block: BasicBlock,
1084        args: &[Spanned<Operand<'tcx>>],
1085        source_info: SourceInfo,
1086    ) -> BlockAnd<()> {
1087        let arg_drops: Vec<_> = args
1088            .iter()
1089            .rev()
1090            .filter_map(|arg| match &arg.node {
1091                Operand::Copy(_) => bug!("copy op in tail call args"),
1092                Operand::Move(place) => {
1093                    let local =
1094                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));
1095
1096                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1097                        return None;
1098                    }
1099
1100                    Some(DropData { source_info, local, kind: DropKind::Value })
1101                }
1102                Operand::Constant(_) => None,
1103            })
1104            .collect();
1105
1106        let mut unwind_to = self.diverge_cleanup_target(
1107            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
1108            DUMMY_SP,
1109        );
1110        let typing_env = self.typing_env();
1111        let unwind_drops = &mut self.scopes.unwind_drops;
1112
1113        // the innermost scope contains only the destructors for the tail call arguments
1114        // we only want to drop these in case of a panic, so we skip it
1115        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
1116            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
1117            for drop_data in scope.drops.iter().rev() {
1118                let source_info = drop_data.source_info;
1119                let local = drop_data.local;
1120
1121                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
1122                    continue;
1123                }
1124
1125                match drop_data.kind {
1126                    DropKind::Value => {
1127                        // `unwind_to` should drop the value that we're about to
1128                        // schedule. If dropping this value panics, then we continue
1129                        // with the *next* value on the unwind path.
1130                        debug_assert_eq!(
1131                            unwind_drops.drop_nodes[unwind_to].data.local,
1132                            drop_data.local
1133                        );
1134                        debug_assert_eq!(
1135                            unwind_drops.drop_nodes[unwind_to].data.kind,
1136                            drop_data.kind
1137                        );
1138                        unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1139
1140                        let mut unwind_entry_point = unwind_to;
1141
1142                        // the tail call arguments must be dropped if any of these drops panic
1143                        for drop in arg_drops.iter().copied() {
1144                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
1145                        }
1146
1147                        unwind_drops.add_entry_point(block, unwind_entry_point);
1148
1149                        let next = self.cfg.start_new_block();
1150                        self.cfg.terminate(
1151                            block,
1152                            source_info,
1153                            TerminatorKind::Drop {
1154                                place: local.into(),
1155                                target: next,
1156                                unwind: UnwindAction::Continue,
1157                                replace: false,
1158                                drop: None,
1159                                async_fut: None,
1160                            },
1161                        );
1162                        block = next;
1163                    }
1164                    DropKind::ForLint(reason) => {
1165                        self.cfg.push(
1166                            block,
1167                            Statement::new(
1168                                source_info,
1169                                StatementKind::BackwardIncompatibleDropHint {
1170                                    place: Box::new(local.into()),
1171                                    reason,
1172                                },
1173                            ),
1174                        );
1175                    }
1176                    DropKind::Storage => {
1177                        // Only temps and vars need their storage dead.
1178                        assert!(local.index() > self.arg_count);
1179                        self.cfg.push(
1180                            block,
1181                            Statement::new(source_info, StatementKind::StorageDead(local)),
1182                        );
1183                    }
1184                }
1185            }
1186        }
1187
1188        block.unit()
1189    }
1190
1191    fn is_async_drop_impl(
1192        tcx: TyCtxt<'tcx>,
1193        local_decls: &IndexVec<Local, LocalDecl<'tcx>>,
1194        typing_env: ty::TypingEnv<'tcx>,
1195        local: Local,
1196    ) -> bool {
1197        let ty = local_decls[local].ty;
1198        if ty.is_async_drop(tcx, typing_env) || ty.is_coroutine() {
1199            return true;
1200        }
1201        ty.needs_async_drop(tcx, typing_env)
1202    }
1203    fn is_async_drop(&self, local: Local) -> bool {
1204        Self::is_async_drop_impl(self.tcx, &self.local_decls, self.typing_env(), local)
1205    }
1206
1207    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
1208        // If we are emitting a `drop` statement, we need to have the cached
1209        // diverge cleanup pads ready in case that drop panics.
1210        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
1211        let is_coroutine = self.coroutine.is_some();
1212        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
1213
1214        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1215        let has_async_drops = is_coroutine
1216            && scope.drops.iter().any(|v| v.kind == DropKind::Value && self.is_async_drop(v.local));
1217        let dropline_to = if has_async_drops { Some(self.diverge_dropline()) } else { None };
1218        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1219        let typing_env = self.typing_env();
1220        build_scope_drops(
1221            &mut self.cfg,
1222            &mut self.scopes.unwind_drops,
1223            &mut self.scopes.coroutine_drops,
1224            scope,
1225            block,
1226            unwind_to,
1227            dropline_to,
1228            is_coroutine && needs_cleanup,
1229            self.arg_count,
1230            |v: Local| Self::is_async_drop_impl(self.tcx, &self.local_decls, typing_env, v),
1231        )
1232        .into_block()
1233    }
1234
1235    /// Possibly creates a new source scope if `current_root` and `parent_root`
1236    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
1237    pub(crate) fn maybe_new_source_scope(
1238        &mut self,
1239        span: Span,
1240        current_id: HirId,
1241        parent_id: HirId,
1242    ) {
1243        let (current_root, parent_root) =
1244            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
1245                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
1246                // the only part of rustc that tracks MIR -> HIR is the
1247                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
1248                // locations. Normally the number of source scopes is limited to the set of nodes
1249                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
1250                // behavior to maximize the number of source scopes, increasing the granularity of
1251                // the MIR->HIR mapping.
1252                (current_id, parent_id)
1253            } else {
1254                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
1255                // parents. We estimate the true lint roots here to avoid creating a lot of source
1256                // scopes.
1257                (
1258                    self.maybe_lint_level_root_bounded(current_id),
1259                    if parent_id == self.hir_id {
1260                        parent_id // this is very common
1261                    } else {
1262                        self.maybe_lint_level_root_bounded(parent_id)
1263                    },
1264                )
1265            };
1266
1267        if current_root != parent_root {
1268            let lint_level = LintLevel::Explicit(current_root);
1269            self.source_scope = self.new_source_scope(span, lint_level);
1270        }
1271    }
1272
1273    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
1274    /// It stops at `self.hir_id` and just returns it if reached.
1275    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
1276        // This assertion lets us just store `ItemLocalId` in the cache, rather
1277        // than the full `HirId`.
1278        assert_eq!(orig_id.owner, self.hir_id.owner);
1279
1280        let mut id = orig_id;
1281        loop {
1282            if id == self.hir_id {
1283                // This is a moderately common case, mostly hit for previously unseen nodes.
1284                break;
1285            }
1286
1287            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
1288                // This is a rare case. It's for a node path that doesn't reach the root due to an
1289                // intervening lint level attribute. This result doesn't get cached.
1290                return id;
1291            }
1292
1293            let next = self.tcx.parent_hir_id(id);
1294            if next == id {
1295                bug!("lint traversal reached the root of the crate");
1296            }
1297            id = next;
1298
1299            // This lookup is just an optimization; it can be removed without affecting
1300            // functionality. It might seem strange to see this at the end of this loop, but the
1301            // `orig_id` passed in to this function is almost always previously unseen, for which a
1302            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
1303            // cache lookups have a very high hit rate.
1304            if self.lint_level_roots_cache.contains(id.local_id) {
1305                break;
1306            }
1307        }
1308
1309        // `orig_id` traced to `self.hir_id`; record this fact. If `orig_id` is a leaf node it will
1310        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
1311        // The performance wins from the cache all come from caching non-leaf nodes.
1312        self.lint_level_roots_cache.insert(orig_id.local_id);
1313        self.hir_id
1314    }
1315
1316    /// Creates a new source scope, nested in the current one.
1317    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
1318        let parent = self.source_scope;
1319        debug!(
1320            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
1321            span,
1322            lint_level,
1323            parent,
1324            self.source_scopes.get(parent)
1325        );
1326        let scope_local_data = SourceScopeLocalData {
1327            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
1328                lint_root
1329            } else {
1330                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
1331            },
1332        };
1333        self.source_scopes.push(SourceScopeData {
1334            span,
1335            parent_scope: Some(parent),
1336            inlined: None,
1337            inlined_parent_scope: None,
1338            local_data: ClearCrossCrate::Set(scope_local_data),
1339        })
1340    }
1341
1342    /// Given a span and the current source scope, makes a `SourceInfo`.
1343    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
1344        SourceInfo { span, scope: self.source_scope }
1345    }
1346
1347    // Finding scopes
1348    // ==============
1349
1350    /// Returns the scope that we should use as the lifetime of an
1351    /// operand. Basically, an operand must live until it is consumed.
1352    /// This is similar to, but not quite the same as, the temporary
1353    /// scope (which can be larger or smaller).
1354    ///
1355    /// Consider:
1356    /// ```ignore (illustrative)
1357    /// let x = foo(bar(X, Y));
1358    /// ```
1359    /// We wish to pop the storage for X and Y after `bar()` is
1360    /// called, not after the whole `let` is completed.
1361    ///
1362    /// As another example, if the second argument diverges:
1363    /// ```ignore (illustrative)
1364    /// foo(Box::new(2), panic!())
1365    /// ```
1366    /// We would allocate the box but then free it on the unwinding
1367    /// path; we would also emit a free on the 'success' path from
1368    /// panic, but that will turn out to be removed as dead-code.
1369    pub(crate) fn local_scope(&self) -> region::Scope {
1370        self.scopes.topmost()
1371    }
1372
1373    // Scheduling drops
1374    // ================
1375
1376    pub(crate) fn schedule_drop_storage_and_value(
1377        &mut self,
1378        span: Span,
1379        region_scope: region::Scope,
1380        local: Local,
1381    ) {
1382        self.schedule_drop(span, region_scope, local, DropKind::Storage);
1383        self.schedule_drop(span, region_scope, local, DropKind::Value);
1384    }
1385
1386    /// Indicates that `place` should be dropped on exit from `region_scope`.
1387    ///
1388    /// When called with `DropKind::Storage`, `local` must not be the return
1389    /// place or a function parameter.
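    ///
    /// As a rough illustration of how callers pair the two kinds (simplified, not the exact
    /// lowering code): after lowering `let s = String::new();` in scope `sc`, the builder does
    /// ```ignore (illustrative)
    /// this.schedule_drop(span, sc, s_local, DropKind::Storage); // StorageDead(s) on scope exit
    /// this.schedule_drop(span, sc, s_local, DropKind::Value);   // Drop(s) on scope exit
    /// ```
    /// which is what [Self::schedule_drop_storage_and_value] bundles up.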
1390    pub(crate) fn schedule_drop(
1391        &mut self,
1392        span: Span,
1393        region_scope: region::Scope,
1394        local: Local,
1395        drop_kind: DropKind,
1396    ) {
1397        let needs_drop = match drop_kind {
1398            DropKind::Value | DropKind::ForLint(_) => {
1399                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1400                    return;
1401                }
1402                true
1403            }
1404            DropKind::Storage => {
1405                if local.index() <= self.arg_count {
1406                    span_bug!(
1407                        span,
1408                        "`schedule_drop` called with body argument {:?} \
1409                        but its storage does not require a drop",
1410                        local,
1411                    )
1412                }
1413                false
1414            }
1415        };
1416
1417        // When building drops, we try to cache chains of drops to reduce the
1418        // number of `DropTree::add_drop` calls. This, however, means that
1419        // whenever we add a drop into a scope which already had some entries
1420        // in the drop tree built (and thus, cached) for it, we must invalidate
1421        // all caches which might branch into the scope which had a drop just
1422        // added to it. This is necessary, because otherwise some other code
1423        // might use the cache to branch into already built chain of drops,
1424        // essentially ignoring the newly added drop.
1425        //
1426        // For example, consider two scopes with a drop in each. These
1427        // are built and thus the caches are filled:
1428        //
1429        // +--------------------------------------------------------+
1430        // | +---------------------------------+                    |
1431        // | | +--------+     +-------------+  |  +---------------+ |
1432        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
1433        // | | +--------+     +-------------+  |  +---------------+ |
1434        // | +------------|outer_scope cache|--+                    |
1435        // +------------------------------|middle_scope cache|------+
1436        //
1437        // Now, a new, innermost scope is added along with a new drop into
1438        // both innermost and outermost scopes:
1439        //
1440        // +------------------------------------------------------------+
1441        // | +----------------------------------+                       |
1442        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
1443        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
1444        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
1445        // | |             +-+ +-------------+  |                       |
1446        // | +---|invalid outer_scope cache|----+                       |
1447        // +----=----------------|invalid middle_scope cache|-----------+
1448        //
1449        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
1450        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
1451        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
1452        // wrong results.
1453        //
1454        // Note that this code iterates scopes from the innermost to the outermost,
1455        // invalidating the cache of each scope visited. This way only the bare minimum
1456        // of caches gets invalidated; e.g., if a new drop is added into the middle scope,
1457        // the cache of the outer scope stays intact.
1458        //
1459        // Since we only cache drops for the unwind path and the coroutine drop
1460        // path, we only need to invalidate the cache for drops that happen on
1461        // the unwind or coroutine drop paths. This means that for
1462        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
1463        let invalidate_caches = needs_drop || self.coroutine.is_some();
1464        for scope in self.scopes.scopes.iter_mut().rev() {
1465            if invalidate_caches {
1466                scope.invalidate_cache();
1467            }
1468
1469            if scope.region_scope == region_scope {
1470                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1471                // Attribute scope exit drops to scope's closing brace.
1472                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1473
1474                scope.drops.push(DropData {
1475                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1476                    local,
1477                    kind: drop_kind,
1478                });
1479
1480                return;
1481            }
1482        }
1483
1484        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
1485    }
1486
1487    /// Schedule emission of a backwards incompatible drop lint hint.
1488    /// Applicable only to temporary values for now.
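    ///
    /// An illustrative case (simplified): the 2024-edition change to tail-expression temporary
    /// scopes is one source of these hints.
    /// ```ignore (illustrative)
    /// fn f(m: &std::sync::Mutex<i32>) -> i32 {
    ///     // The `m.lock().unwrap()` temporary is dropped earlier in edition 2024 than in
    ///     // edition 2021, so a `DropKind::ForLint` drop is scheduled for it here, which
    ///     // later lets borrowck surface the behavior change.
    ///     *m.lock().unwrap()
    /// }
    /// ```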
1489    #[instrument(level = "debug", skip(self))]
1490    pub(crate) fn schedule_backwards_incompatible_drop(
1491        &mut self,
1492        span: Span,
1493        region_scope: region::Scope,
1494        local: Local,
1495        reason: BackwardIncompatibleDropReason,
1496    ) {
1497        // Note that we are *not* gating BIDs here on whether they have a significant destructor.
1498        // We need to know all of them so that we can capture potential borrow-checking errors.
1499        for scope in self.scopes.scopes.iter_mut().rev() {
1500            // Since we are inserting a linting MIR statement, we have to invalidate the caches.
1501            scope.invalidate_cache();
1502            if scope.region_scope == region_scope {
1503                // We'll be using this span in diagnostics, so let's make sure it points to the
1504                // end of the block, not just the end of the tail expression.
1505                let region_scope_span = if reason
1506                    == BackwardIncompatibleDropReason::MacroExtendedScope
1507                    && let Some(scope_hir_id) = region_scope.hir_id(self.region_scope_tree)
1508                    && let hir::Node::Expr(expr) = self.tcx.hir_node(scope_hir_id)
1509                    && let hir::Node::Block(blk) = self.tcx.parent_hir_node(expr.hir_id)
1510                {
1511                    blk.span
1512                } else {
1513                    region_scope.span(self.tcx, self.region_scope_tree)
1514                };
1515                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1516
1517                scope.drops.push(DropData {
1518                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1519                    local,
1520                    kind: DropKind::ForLint(reason),
1521                });
1522
1523                return;
1524            }
1525        }
1526        span_bug!(
1527            span,
1528            "region scope {:?} not in scope to drop {:?} for linting",
1529            region_scope,
1530            local
1531        );
1532    }
1533
1534    /// Indicates that the "local operand" stored in `local` is
1535    /// *moved* at some point during execution (see `local_scope` for
1536    /// more information about what a "local operand" is -- in short,
1537    /// it's an intermediate operand created as part of preparing some
1538    /// MIR instruction). We use this information to suppress
1539    /// redundant drops on the non-unwind paths. This results in less
1540    /// MIR, but also avoids spurious borrow check errors
1541    /// (cf. #64391).
1542    ///
1543    /// Example: when compiling the call to `foo` here:
1544    ///
1545    /// ```ignore (illustrative)
1546    /// foo(bar(), ...)
1547    /// ```
1548    ///
1549    /// we would evaluate `bar()` to an operand `_X`. We would also
1550    /// schedule `_X` to be dropped when the expression scope for
1551    /// `foo(bar())` is exited. This is relevant, for example, if one of
1552    /// the later arguments unwinds (the scheduled drop ensures that `_X`
1553    /// gets dropped). However, if no unwind occurs, then `_X` will be
1554    /// unconditionally consumed by the `call`:
1555    ///
1556    /// ```ignore (illustrative)
1557    /// bb {
1558    ///   ...
1559    ///   _R = CALL(foo, _X, ...)
1560    /// }
1561    /// ```
1562    ///
1563    /// However, `_X` is still registered to be dropped, and so if we
1564    /// do nothing else, we would generate a `DROP(_X)` that occurs
1565    /// after the call. This will later be optimized out by the
1566    /// drop-elaboration code, but in the meantime it can lead to
1567    /// spurious borrow-check errors -- the problem, ironically, is
1568    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1569    /// that it creates. See #64391 for an example.
1570    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
1571        let local_scope = self.local_scope();
1572        let scope = self.scopes.scopes.last_mut().unwrap();
1573
1574        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1575
1576        // look for moves of a local variable, like `MOVE(_X)`
1577        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
1578            Operand::Copy(_) | Operand::Constant(_) => None,
1579            Operand::Move(place) => place.as_local(),
1580        });
1581
1582        for local in locals_moved {
1583            // check if we have a Drop for this operand and -- if so
1584            // -- add it to the list of moved operands. Note that this
1585            // local might not have been an operand created for this
1586            // call, it could come from other places too.
1587            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1588                scope.moved_locals.push(local);
1589            }
1590        }
1591    }
1592
1593    // Other
1594    // =====
1595
1596    /// Returns the [DropIdx] for the innermost drop if the function unwound at
1597    /// this point. The `DropIdx` will be created if it doesn't already exist.
1598    fn diverge_cleanup(&mut self) -> DropIdx {
1599        // It is okay to use a dummy span because getting the scope index of the
1600        // topmost scope must always succeed.
1601        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1602    }
1603
1604    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1605    /// some ancestor scope instead of the current scope.
1606    /// It is possible to unwind to some ancestor scope if some drop panics while
1607    /// the program is breaking out of an if-then scope.
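    ///
    /// A sketch of one such situation (illustrative only; `cond_temp` is a made-up helper):
    /// ```ignore (illustrative)
    /// if let Some(x) = cond_temp().get() { /* ... */ }
    /// // If the pattern fails to match, control breaks out of the if-then scope and the
    /// // `cond_temp()` temporary is dropped on that edge; if that drop panics, the unwind
    /// // path must run the cleanups of an enclosing (ancestor) scope, not the then-block's.
    /// ```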
1608    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1609        let target = self.scopes.scope_index(target_scope, span);
1610        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1611            .iter()
1612            .enumerate()
1613            .rev()
1614            .find_map(|(scope_idx, scope)| {
1615                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1616            })
1617            .unwrap_or((0, ROOT_NODE));
1618
1619        if uncached_scope > target {
1620            return cached_drop;
1621        }
1622
1623        let is_coroutine = self.coroutine.is_some();
1624        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1625            for drop in &scope.drops {
1626                if is_coroutine || drop.kind == DropKind::Value {
1627                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1628                }
1629            }
1630            scope.cached_unwind_block = Some(cached_drop);
1631        }
1632
1633        cached_drop
1634    }
1635
1636    /// Prepares to create a path that performs all required cleanup for a
1637    /// terminator that can unwind at the given basic block.
1638    ///
1639    /// This path terminates in Resume. The path isn't created until after all
1640    /// of the non-unwind paths in this item have been lowered.
1641    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1642        debug_assert!(
1643            matches!(
1644                self.cfg.block_data(start).terminator().kind,
1645                TerminatorKind::Assert { .. }
1646                    | TerminatorKind::Call { .. }
1647                    | TerminatorKind::Drop { .. }
1648                    | TerminatorKind::FalseUnwind { .. }
1649                    | TerminatorKind::InlineAsm { .. }
1650            ),
1651            "diverge_from called on block with terminator that cannot unwind."
1652        );
1653
1654        let next_drop = self.diverge_cleanup();
1655        self.scopes.unwind_drops.add_entry_point(start, next_drop);
1656    }
1657
1658    /// Returns the [DropIdx] for the innermost drop on the dropline (coroutine drop path).
1659    /// The `DropIdx` will be created if it doesn't already exist.
1660    fn diverge_dropline(&mut self) -> DropIdx {
1661        // It is okay to use a dummy span because getting the scope index of the
1662        // topmost scope must always succeed.
1663        self.diverge_dropline_target(self.scopes.topmost(), DUMMY_SP)
1664    }
1665
1666    /// Similar to [Self::diverge_cleanup_target], but for the dropline (coroutine drop path).
1667    fn diverge_dropline_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1668        debug_assert!(
1669            self.coroutine.is_some(),
1670            "diverge_dropline_target is valid only for coroutine"
1671        );
1672        let target = self.scopes.scope_index(target_scope, span);
1673        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1674            .iter()
1675            .enumerate()
1676            .rev()
1677            .find_map(|(scope_idx, scope)| {
1678                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1679            })
1680            .unwrap_or((0, ROOT_NODE));
1681
1682        if uncached_scope > target {
1683            return cached_drop;
1684        }
1685
1686        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1687            for drop in &scope.drops {
1688                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
1689            }
1690            scope.cached_coroutine_drop_block = Some(cached_drop);
1691        }
1692
1693        cached_drop
1694    }
1695
1696    /// Sets up a path that performs all required cleanup for dropping a
1697    /// coroutine, starting from the given block that ends in
1698    /// [TerminatorKind::Yield].
1699    ///
1700    /// This path terminates in CoroutineDrop.
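    ///
    /// For illustration (simplified; `other_future` is a stand-in): if the coroutine below is
    /// dropped while suspended at the `.await`, the path built here must still drop `s`.
    /// ```ignore (illustrative)
    /// async fn f() {
    ///     let s = String::from("live across a suspension point");
    ///     other_future().await; // yield point: dropping the suspended future here enters
    ///                           // the coroutine-drop path, which drops `s`
    /// }
    /// ```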
1701    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
1702        debug_assert!(
1703            matches!(
1704                self.cfg.block_data(yield_block).terminator().kind,
1705                TerminatorKind::Yield { .. }
1706            ),
1707            "coroutine_drop_cleanup called on block with non-yield terminator."
1708        );
1709        let cached_drop = self.diverge_dropline();
1710        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
1711    }
1712
1713    /// Utility function for *non*-scope code to build its own drops.
1714    /// Forces a drop at this point in the MIR by creating a new block.
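    ///
    /// Illustrative source that needs this shape (assignment to an already-initialized place,
    /// whose old value must be dropped before the new value is written):
    /// ```ignore (illustrative)
    /// let mut s = String::from("old");
    /// s = String::from("new"); // lowered to a `Drop` terminator with `replace: true`,
    ///                          // followed by the assignment in the drop's target block
    /// ```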
1715    pub(crate) fn build_drop_and_replace(
1716        &mut self,
1717        block: BasicBlock,
1718        span: Span,
1719        place: Place<'tcx>,
1720        value: Rvalue<'tcx>,
1721    ) -> BlockAnd<()> {
1722        let source_info = self.source_info(span);
1723
1724        // create the new block for the assignment
1725        let assign = self.cfg.start_new_block();
1726        self.cfg.push_assign(assign, source_info, place, value.clone());
1727
1728        // create the new block for the assignment in the case of unwinding
1729        let assign_unwind = self.cfg.start_new_cleanup_block();
1730        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1731
1732        self.cfg.terminate(
1733            block,
1734            source_info,
1735            TerminatorKind::Drop {
1736                place,
1737                target: assign,
1738                unwind: UnwindAction::Cleanup(assign_unwind),
1739                replace: true,
1740                drop: None,
1741                async_fut: None,
1742            },
1743        );
1744        self.diverge_from(block);
1745
1746        assign.unit()
1747    }
1748
1749    /// Creates an `Assert` terminator and returns the success block.
1750    /// If the boolean condition operand does not evaluate to the expected value,
1751    /// a runtime panic is raised with the given message.
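    ///
    /// Illustrative sources of such assertions (not exhaustive): array bounds checks and, when
    /// overflow checks are enabled, arithmetic overflow checks.
    /// ```ignore (illustrative)
    /// let y = xs[i];  // Assert(i < len(xs)) -> success block performs the indexing
    /// let z = a + b;  // Assert(!overflowed) -> success block uses the sum
    /// ```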
1752    pub(crate) fn assert(
1753        &mut self,
1754        block: BasicBlock,
1755        cond: Operand<'tcx>,
1756        expected: bool,
1757        msg: AssertMessage<'tcx>,
1758        span: Span,
1759    ) -> BasicBlock {
1760        let source_info = self.source_info(span);
1761        let success_block = self.cfg.start_new_block();
1762
1763        self.cfg.terminate(
1764            block,
1765            source_info,
1766            TerminatorKind::Assert {
1767                cond,
1768                expected,
1769                msg: Box::new(msg),
1770                target: success_block,
1771                unwind: UnwindAction::Continue,
1772            },
1773        );
1774        self.diverge_from(block);
1775
1776        success_block
1777    }
1778
1779    /// Unschedules any drops in the top two scopes.
1780    ///
1781    /// This is only needed for pattern matches combining guards and or-patterns: or-patterns lead
1782    /// to a guard being lowered multiple times before the arm body is lowered, so we unschedule
1783    /// the drops for the guard's temporaries and bindings between lowering each copy of the guard.
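    ///
    /// For illustration (simplified): in the arm below, the guard is lowered once for `A(x)` and
    /// once for `B(x)`, so drops scheduled while lowering the first copy are cleared before the
    /// second copy is lowered.
    /// ```ignore (illustrative)
    /// match e {
    ///     A(x) | B(x) if guard(&x) => body(x),
    ///     _ => {}
    /// }
    /// ```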
1784    pub(crate) fn clear_match_arm_and_guard_scopes(&mut self, region_scope: region::Scope) {
1785        let [.., arm_scope, guard_scope] = &mut *self.scopes.scopes else {
1786            bug!("matches with guards should introduce separate scopes for the pattern and guard");
1787        };
1788
1789        assert_eq!(arm_scope.region_scope, region_scope);
1790        assert_eq!(guard_scope.region_scope.data, region::ScopeData::MatchGuard);
1791        assert_eq!(guard_scope.region_scope.local_id, region_scope.local_id);
1792
1793        arm_scope.drops.clear();
1794        arm_scope.invalidate_cache();
1795        guard_scope.drops.clear();
1796        guard_scope.invalidate_cache();
1797    }
1798}
1799
1800/// Builds drops for `pop_scope` and `leave_top_scope`.
1801///
1802/// # Parameters
1803///
1804/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if unwinding occurs
1805/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
1806/// * `block`, the block to branch to once drops are complete (assuming no unwind occurs)
1807/// * `unwind_to`, describes the drops that would occur at this point in the code if a
1808///   panic occurred (a subset of the drops in `scope`, since we sometimes elide StorageDead and other
1809///   instructions on unwinding)
1810/// * `dropline_to`, describes the drops that would occur at this point in the code if a
1811///    coroutine drop occurred.
1812/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
1813/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't emit `StorageDead` for them)
1814fn build_scope_drops<'tcx, F>(
1815    cfg: &mut CFG<'tcx>,
1816    unwind_drops: &mut DropTree,
1817    coroutine_drops: &mut DropTree,
1818    scope: &Scope,
1819    block: BasicBlock,
1820    unwind_to: DropIdx,
1821    dropline_to: Option<DropIdx>,
1822    storage_dead_on_unwind: bool,
1823    arg_count: usize,
1824    is_async_drop: F,
1825) -> BlockAnd<()>
1826where
1827    F: Fn(Local) -> bool,
1828{
1829    debug!("build_scope_drops({:?} -> {:?}), dropline_to={:?}", block, scope, dropline_to);
1830
1831    // Build up the drops in evaluation order. The end result will
1832    // look like:
1833    //
1834    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1835    //               |                    |                 |
1836    //               :                    |                 |
1837    //                                    V                 V
1838    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1839    //
1840    // The horizontal arrows represent the execution path when the drops return
1841    // successfully. The downwards arrows represent the execution path when the
1842    // drops panic (panicking while unwinding will abort, so there's no need for
1843    // another set of arrows).
1844    //
1845    // For coroutines, we unwind from a drop on a local to its StorageDead
1846    // statement. For other functions we don't worry about StorageDead. The
1847    // drops for the unwind path should have already been generated by
1848    // `diverge_cleanup`.
1849
1850    // `unwind_to` indicates what needs to be dropped should unwinding occur.
1851    // This is a subset of what needs to be dropped when exiting the scope.
1852    // As we build the scope's drops, we also move `unwind_to` backwards to match,
1853    // so that we can use it should a destructor panic.
1854    let mut unwind_to = unwind_to;
1855
1856    // `block` is the block into which the next drop is built. It starts as the incoming block,
1857    // which receives `drops[n]` (the first drop to execute in the diagram above); after each drop
1858    // is built, `block` becomes the freshly created successor block that the just-built drop
1859    // branches to. Its final value is the block reached once all of the drops have completed.
1860    let mut block = block;
1861
1862    // `dropline_to` indicates what needs to be dropped should a coroutine drop occur.
1863    let mut dropline_to = dropline_to;
1864
1865    for drop_data in scope.drops.iter().rev() {
1866        let source_info = drop_data.source_info;
1867        let local = drop_data.local;
1868
1869        match drop_data.kind {
1870            DropKind::Value => {
1871                // `unwind_to` should drop the value that we're about to
1872                // schedule. If dropping this value panics, then we continue
1873                // with the *next* value on the unwind path.
1874                //
1875                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
1876                // because `drops[n]` should unwind to `drops[n-1]`.
1877                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.local, drop_data.local);
1878                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1879                unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1880
1881                if let Some(idx) = dropline_to {
1882                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1883                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1884                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1885                }
1886
1887                // If the operand has been moved, and we are not on an unwind
1888                // path, then don't generate the drop. (We only take this into
1889                // account for non-unwind paths so as not to disturb the
1890                // caching mechanism.)
1891                if scope.moved_locals.contains(&local) {
1892                    continue;
1893                }
1894
1895                unwind_drops.add_entry_point(block, unwind_to);
1896                if let Some(to) = dropline_to
1897                    && is_async_drop(local)
1898                {
1899                    coroutine_drops.add_entry_point(block, to);
1900                }
1901
1902                let next = cfg.start_new_block();
1903                cfg.terminate(
1904                    block,
1905                    source_info,
1906                    TerminatorKind::Drop {
1907                        place: local.into(),
1908                        target: next,
1909                        unwind: UnwindAction::Continue,
1910                        replace: false,
1911                        drop: None,
1912                        async_fut: None,
1913                    },
1914                );
1915                block = next;
1916            }
1917            DropKind::ForLint(reason) => {
1918                // As in the `DropKind::Storage` case below:
1919                // normally lint-related drops are not emitted for unwind,
1920                // so we can just leave `unwind_to` unmodified, but in some
1921                // cases we emit things ALSO on the unwind path, so we need to adjust
1922                // `unwind_to` in that case.
1923                if storage_dead_on_unwind {
1924                    debug_assert_eq!(
1925                        unwind_drops.drop_nodes[unwind_to].data.local,
1926                        drop_data.local
1927                    );
1928                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1929                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1930                }
1931
1932                // If the operand has been moved, and we are not on an unwind
1933                // path, then don't generate the drop. (We only take this into
1934                // account for non-unwind paths so as not to disturb the
1935                // caching mechanism.)
1936                if scope.moved_locals.contains(&local) {
1937                    continue;
1938                }
1939
1940                cfg.push(
1941                    block,
1942                    Statement::new(
1943                        source_info,
1944                        StatementKind::BackwardIncompatibleDropHint {
1945                            place: Box::new(local.into()),
1946                            reason,
1947                        },
1948                    ),
1949                );
1950            }
1951            DropKind::Storage => {
1952                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
1953                // need to adjust `unwind_to` on this path. However, in some specific cases
1954                // we *do* emit storage-dead nodes on the unwind path, and in that case now that
1955                // the storage-dead has completed, we need to adjust the `unwind_to` pointer
1956                // so that any future drops we emit will not register storage-dead.
1957                if storage_dead_on_unwind {
1958                    debug_assert_eq!(
1959                        unwind_drops.drop_nodes[unwind_to].data.local,
1960                        drop_data.local
1961                    );
1962                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1963                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1964                }
1965                if let Some(idx) = dropline_to {
1966                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1967                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1968                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1969                }
1970                // Only temps and vars need their storage dead.
1971                assert!(local.index() > arg_count);
1972                cfg.push(block, Statement::new(source_info, StatementKind::StorageDead(local)));
1973            }
1974        }
1975    }
1976    block.unit()
1977}
1978
1979impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1980    /// Build a drop tree for a breakable scope.
1981    ///
1982    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1983    /// loop. Otherwise this is for `break` or `return`.
1984    fn build_exit_tree(
1985        &mut self,
1986        mut drops: DropTree,
1987        else_scope: region::Scope,
1988        span: Span,
1989        continue_block: Option<BasicBlock>,
1990    ) -> Option<BlockAnd<()>> {
1991        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
1992        let is_coroutine = self.coroutine.is_some();
1993
1994        // Link the exit drop tree to the unwind drop tree.
1995        if drops.drop_nodes.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
1996            let unwind_target = self.diverge_cleanup_target(else_scope, span);
1997            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1998            for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated().skip(1) {
1999                match drop_node.data.kind {
2000                    DropKind::Storage | DropKind::ForLint(_) => {
2001                        if is_coroutine {
2002                            let unwind_drop = self
2003                                .scopes
2004                                .unwind_drops
2005                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
2006                            unwind_indices.push(unwind_drop);
2007                        } else {
2008                            unwind_indices.push(unwind_indices[drop_node.next]);
2009                        }
2010                    }
2011                    DropKind::Value => {
2012                        let unwind_drop = self
2013                            .scopes
2014                            .unwind_drops
2015                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
2016                        self.scopes.unwind_drops.add_entry_point(
2017                            blocks[drop_idx].unwrap(),
2018                            unwind_indices[drop_node.next],
2019                        );
2020                        unwind_indices.push(unwind_drop);
2021                    }
2022                }
2023            }
2024        }
2025        // Link the exit drop tree to the dropline drop tree (coroutine drop path) for async drops.
2026        if is_coroutine
2027            && drops.drop_nodes.iter().any(|DropNode { data, next: _ }| {
2028                data.kind == DropKind::Value && self.is_async_drop(data.local)
2029            })
2030        {
2031            let dropline_target = self.diverge_dropline_target(else_scope, span);
2032            let mut dropline_indices = IndexVec::from_elem_n(dropline_target, 1);
2033            for (drop_idx, drop_data) in drops.drop_nodes.iter_enumerated().skip(1) {
2034                let coroutine_drop = self
2035                    .scopes
2036                    .coroutine_drops
2037                    .add_drop(drop_data.data, dropline_indices[drop_data.next]);
2038                match drop_data.data.kind {
2039                    DropKind::Storage | DropKind::ForLint(_) => {}
2040                    DropKind::Value => {
2041                        if self.is_async_drop(drop_data.data.local) {
2042                            self.scopes.coroutine_drops.add_entry_point(
2043                                blocks[drop_idx].unwrap(),
2044                                dropline_indices[drop_data.next],
2045                            );
2046                        }
2047                    }
2048                }
2049                dropline_indices.push(coroutine_drop);
2050            }
2051        }
2052        blocks[ROOT_NODE].map(BasicBlock::unit)
2053    }
2054
2055    /// Build the unwind and coroutine drop trees.
2056    pub(crate) fn build_drop_trees(&mut self) {
2057        if self.coroutine.is_some() {
2058            self.build_coroutine_drop_trees();
2059        } else {
2060            Self::build_unwind_tree(
2061                &mut self.cfg,
2062                &mut self.scopes.unwind_drops,
2063                self.fn_span,
2064                &mut None,
2065            );
2066        }
2067    }
2068
2069    fn build_coroutine_drop_trees(&mut self) {
2070        // Build the drop tree for dropping the coroutine while it's suspended.
2071        let drops = &mut self.scopes.coroutine_drops;
2072        let cfg = &mut self.cfg;
2073        let fn_span = self.fn_span;
2074        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
2075        if let Some(root_block) = blocks[ROOT_NODE] {
2076            cfg.terminate(
2077                root_block,
2078                SourceInfo::outermost(fn_span),
2079                TerminatorKind::CoroutineDrop,
2080            );
2081        }
2082
2083        // Build the drop tree for unwinding in the normal control flow paths.
2084        let resume_block = &mut None;
2085        let unwind_drops = &mut self.scopes.unwind_drops;
2086        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
2087
2088        // Build the drop tree for unwinding when dropping a suspended
2089        // coroutine.
2090        //
2091        // This tree is kept separate from the standard unwind tree in order to
2092        // prevent drop elaboration from creating drop flags that would have
2093        // to be captured by the coroutine. I'm not sure how important this
2094        // optimization is, but it is here.
2095        for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated() {
2096            if let DropKind::Value = drop_node.data.kind
2097                && let Some(bb) = blocks[drop_idx]
2098            {
2099                debug_assert!(drop_node.next < drops.drop_nodes.next_index());
2100                drops.entry_points.push((drop_node.next, bb));
2101            }
2102        }
2103        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
2104    }
2105
2106    fn build_unwind_tree(
2107        cfg: &mut CFG<'tcx>,
2108        drops: &mut DropTree,
2109        fn_span: Span,
2110        resume_block: &mut Option<BasicBlock>,
2111    ) {
2112        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
2113        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
2114            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);
2115
2116            *resume_block = blocks[ROOT_NODE];
2117        }
2118    }
2119}
2120
2121// DropTreeBuilder implementations.
2122
2123struct ExitScopes;
2124
2125impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
2126    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2127        cfg.start_new_block()
2128    }
2129    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2130        // There should be an existing terminator with real source info and a
2131        // dummy TerminatorKind. Replace it with a proper goto.
2132        // (The dummy is added by `break_scope` and `break_for_else`.)
2133        let term = cfg.block_data_mut(from).terminator_mut();
2134        if let TerminatorKind::UnwindResume = term.kind {
2135            term.kind = TerminatorKind::Goto { target: to };
2136        } else {
2137            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
2138        }
2139    }
2140}
2141
2142struct CoroutineDrop;
2143
2144impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
2145    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2146        cfg.start_new_block()
2147    }
2148    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2149        let term = cfg.block_data_mut(from).terminator_mut();
2150        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
2151            *drop = Some(to);
2152        } else if let TerminatorKind::Drop { ref mut drop, .. } = term.kind {
2153            *drop = Some(to);
2154        } else {
2155            span_bug!(
2156                term.source_info.span,
2157                "cannot enter coroutine drop tree from {:?}",
2158                term.kind
2159            )
2160        }
2161    }
2162}
2163
2164struct Unwind;
2165
2166impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
2167    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2168        cfg.start_new_cleanup_block()
2169    }
2170    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2171        let term = &mut cfg.block_data_mut(from).terminator_mut();
2172        match &mut term.kind {
2173            TerminatorKind::Drop { unwind, .. } => {
2174                if let UnwindAction::Cleanup(unwind) = *unwind {
2175                    let source_info = term.source_info;
2176                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
2177                } else {
2178                    *unwind = UnwindAction::Cleanup(to);
2179                }
2180            }
2181            TerminatorKind::FalseUnwind { unwind, .. }
2182            | TerminatorKind::Call { unwind, .. }
2183            | TerminatorKind::Assert { unwind, .. }
2184            | TerminatorKind::InlineAsm { unwind, .. } => {
2185                *unwind = UnwindAction::Cleanup(to);
2186            }
2187            TerminatorKind::Goto { .. }
2188            | TerminatorKind::SwitchInt { .. }
2189            | TerminatorKind::UnwindResume
2190            | TerminatorKind::UnwindTerminate(_)
2191            | TerminatorKind::Return
2192            | TerminatorKind::TailCall { .. }
2193            | TerminatorKind::Unreachable
2194            | TerminatorKind::Yield { .. }
2195            | TerminatorKind::CoroutineDrop
2196            | TerminatorKind::FalseEdge { .. } => {
2197                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
2198            }
2199        }
2200    }
2201}