rustc_mir_build/builder/scope.rs

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` for each SEME region for later reference
(see the caveat in the next paragraph). This is because destruction scopes are
tied to them. This may change in the future so that MIR lowering determines its
own destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not even always SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to exit a scope early: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.
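
For example (an illustrative sketch):

```
# let cond = true;
'outer: loop {
    let x = String::new();
    if cond {
        break 'outer; // early exit from the loop scope
    }
}
```

Here the `break` records the drop of `x` in the loop's break [DropTree], and
that drop is emitted on the edge leaving the loop once `in_breakable_scope`
finishes.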

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.

*/

use std::mem;

use rustc_data_structures::fx::FxHashMap;
use rustc_hir::HirId;
use rustc_index::{IndexSlice, IndexVec};
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_middle::thir::{ExprId, LintLevel};
use rustc_middle::{bug, span_bug};
use rustc_session::lint::Level;
use rustc_span::source_map::Spanned;
use rustc_span::{DUMMY_SP, Span};
use tracing::{debug, instrument};

use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};

#[derive(Debug)]
pub(crate) struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
    coroutine_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// coroutine drop path.
    cached_coroutine_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value `Drop`, a `StorageDead`, or a
    /// backward-incompatible drop lint hint.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
    ForLint,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression)
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope
    region_scope: region::Scope,
    /// Drops that happen on the `else` path.
    else_drops: DropTree,
}

/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
pub(crate) enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    #[orderable]
    struct DropIdx {}
}

const ROOT_NODE: DropIdx = DropIdx::ZERO;

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
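///
/// For example, a rough sketch of how MIR building drives this type (the
/// `DropData` values `drop_a`/`drop_b` and the basic block `some_block` are
/// hypothetical):
///
/// ```ignore (illustrative)
/// let mut tree = DropTree::new();
/// // Build a chain in drop order: `drop_b` runs before `drop_a`.
/// let a = tree.add_drop(drop_a, ROOT_NODE);
/// let b = tree.add_drop(drop_b, a);
/// // `some_block` should branch into the tree at `b`.
/// tree.add_entry_point(some_block, b);
/// // Later, `build_mir` lowers the whole tree into the CFG in one go.
/// ```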
#[derive(Debug)]
struct DropTree {
    /// Nodes in the drop tree, containing drop data and a link to the next node.
    drops: IndexVec<DropIdx, DropNode>,
    /// Map for finding the index of an existing node, given its contents.
    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

/// A single node in the drop tree.
#[derive(Debug)]
struct DropNode {
    /// Info about the drop to be performed at this node in the drop tree.
    data: DropData,
    /// Index of the "next" drop to perform (in drop order, not declaration order).
    next: DropIdx,
}

/// Subset of [`DropNode`] used for reverse lookup in a hash table.
#[derive(Debug, PartialEq, Eq, Hash)]
struct DropNodeKey {
    next: DropIdx,
    local: Local,
    kind: DropKind,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for use by
    /// optimizations in the MIR coroutine transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value | DropKind::ForLint => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_coroutine_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drops = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
        Self { drops, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
    }

    /// Adds a node to the drop tree, consisting of drop data and the index of
    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
    ///
    /// If there is already an equivalent node in the tree, nothing is added, and
    /// that node's index is returned. Otherwise, the new node's index is returned.
    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .existing_drops_map
            .entry(DropNodeKey { next, local: data.local, kind: data.kind })
            // Create a new node, and also add its index to the map.
            .or_insert_with(|| drops.push(DropNode { data, next }))
    }

    /// Registers `from` as an entry point to this drop tree, at `to`.
    ///
    /// During [`Self::build_mir`], `from` will be linked to the corresponding
    /// block within the drop tree.
    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        debug!("DropTree::build_mir(drops = {:#?})", self);

        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
        self.link_blocks(cfg, &mut blocks);

        blocks
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable.
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }
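
        // An illustrative example (with a hypothetical drop chain, shown in
        // drop order):
        //
        //     StorageDead(_3) -> StorageDead(_2) -> Drop(_1) -> ROOT
        //
        // With a single entry point at `StorageDead(_3)`, all three drop nodes
        // can share one block: the two `StorageDead`s become statements and
        // `Drop(_1)` becomes that block's terminator. The root node still gets
        // its own block, because it is the target of a `Value` drop.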

        let mut blocks = IndexVec::from_elem(None, &self.drops);
        blocks[ROOT_NODE] = root_node;

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if root_node.is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_node) in self.drops.iter_enumerated().rev() {
            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::link_entry_point(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_node.data.kind {
                needs_block[drop_node.next] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_node.next] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());

        blocks
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_node) in self.drops.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_node.data.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_node.next].unwrap(),
                        // The caller will handle this if needed.
                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
                        place: drop_node.data.local.into(),
                        replace: false,
                    };
                    cfg.terminate(block, drop_node.data.source_info, terminator);
                }
                DropKind::ForLint => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(drop_node.data.local.into()),
                            reason: BackwardIncompatibleDropReason::Edition2024,
                        },
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::StorageDead(drop_node.data.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            coroutine_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_coroutine_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    ///  Start a breakable scope, which tracks where `continue`, `break` and
    ///  `return` should branch to.
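    ///
    /// A rough sketch of how lowering a `loop` expression might use this (the
    /// surrounding expression-lowering details are elided and `destination` is
    /// a placeholder):
    ///
    /// ```ignore (illustrative)
    /// let loop_block = this.cfg.start_new_block();
    /// this.in_breakable_scope(Some(loop_block), destination, span, |this| {
    ///     // ... lower the loop body, jumping back to `loop_block` ...
    ///     None // a `loop` never exits normally; only `break` leaves it
    /// });
    /// ```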
    pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    /// ```
    ///
    /// There are three possible ways the condition can be false, and we may have
    /// to drop `x`, both `x` and `y`, or neither, depending on which binding fails.
    /// To handle this correctly we use a `DropTree` in a similar way to a
    /// `loop` expression and 'break' out on all of the 'else' paths.
    ///
    /// Notes:
    /// - We don't need to keep a stack of scopes in the `Builder` because the
    ///   'else' paths will only leave the innermost scope.
    /// - This is also used for match guards.
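    ///
    /// A rough sketch of how this is used when lowering a condition (the
    /// helper `lower_condition` is hypothetical):
    ///
    /// ```ignore (illustrative)
    /// let (then_blk, else_blk) = this.in_if_then_scope(region_scope, span, |this| {
    ///     // Lower the condition here. Wherever it turns out to be false, call
    ///     // `break_for_else(block, source_info)` to record the scheduled drops
    ///     // and branch to the `else` drop tree; return the block reached when
    ///     // the condition is true.
    ///     lower_condition(this)
    /// });
    /// ```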
    pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = f(self).into_block();

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block =
            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
                || self.cfg.start_new_block(),
                |else_block_and| else_block_and.into_block(),
            );

        (then_block, else_block)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    #[instrument(skip(self, f), level = "debug")]
    pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        let source_scope = self.source_scope;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            let parent_id =
                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        block = self.pop_scope(region_scope, block).into_block();
        self.source_scope = source_scope;
        debug!(?block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
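    ///
    /// For example, lowering `break 'label value` ends up making a call roughly
    /// like the following (`block`, `value_expr`, and `scope` stand in for
    /// whatever the caller has at hand):
    ///
    /// ```ignore (illustrative)
    /// this.break_scope(block, Some(value_expr), BreakableTarget::Break(scope), source_info)
    /// ```
    ///
    /// which writes the value into the target's `break_destination`, records the
    /// drops for every scope being exited in the target's break [DropTree], and
    /// returns a fresh block (with no predecessors yet) for any code after the
    /// `break`.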
    pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprId>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        match (destination, value) {
            (Some(destination), Some(value)) => {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                block = self.expr_into_dest(destination, block, value).into_block();
                self.block_context.pop();
            }
            (Some(destination), None) => {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with a value must have a destination")
            }
            (None, None) => {
                if self.tcx.sess.instrument_coverage() {
                    // Normally we wouldn't build any MIR in this case, but that makes it
                    // harder for coverage instrumentation to extract a relevant span for
                    // `continue` expressions. So here we inject a dummy statement with the
                    // desired span.
                    self.cfg.push_coverage_span_marker(block, source_info);
                }
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
            else {
                self.tcx.dcx().span_delayed_bug(
                    source_info.span,
                    "unlabelled `continue` within labelled block",
                );
                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);

                return self.cfg.start_new_block().unit();
            };
            drops
        };

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);

        self.cfg.start_new_block().unit()
    }

    /// Sets up the drops for breaking from `block` due to an `if` condition
    /// that turned out to be false.
    ///
    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
    /// there is an if-then scope to tell us what the target scope is.
    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_ref()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        let target = if_then_scope.region_scope;
        let scope_index = self.scopes.scope_index(target, source_info.span);

        // Upgrade `if_then_scope` to `&mut`.
        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
    }

    /// Sets up the drops for explicit tail calls.
    ///
    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
    /// Instead, all scheduled drops are immediately added to the CFG.
    pub(crate) fn break_for_tail_call(
        &mut self,
        mut block: BasicBlock,
        args: &[Spanned<Operand<'tcx>>],
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let arg_drops: Vec<_> = args
            .iter()
            .rev()
            .filter_map(|arg| match &arg.node {
                Operand::Copy(_) => bug!("copy op in tail call args"),
                Operand::Move(place) => {
                    let local =
                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));

                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                        return None;
                    }

                    Some(DropData { source_info, local, kind: DropKind::Value })
                }
                Operand::Constant(_) => None,
            })
            .collect();

        let mut unwind_to = self.diverge_cleanup_target(
            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
            DUMMY_SP,
        );
        let typing_env = self.typing_env();
        let unwind_drops = &mut self.scopes.unwind_drops;

        // The innermost scope contains only the destructors for the tail call
        // arguments; we only want to drop these in case of a panic, so we skip it.
        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
            for drop_data in scope.drops.iter().rev() {
                let source_info = drop_data.source_info;
                let local = drop_data.local;

                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
                    continue;
                }

                match drop_data.kind {
                    DropKind::Value => {
                        // `unwind_to` should drop the value that we're about to
                        // schedule. If dropping this value panics, then we continue
                        // with the *next* value on the unwind path.
                        debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
                        debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
                        unwind_to = unwind_drops.drops[unwind_to].next;

                        let mut unwind_entry_point = unwind_to;

                        // The tail call arguments must be dropped if any of these drops panic.
                        for drop in arg_drops.iter().copied() {
                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
                        }

                        unwind_drops.add_entry_point(block, unwind_entry_point);

                        let next = self.cfg.start_new_block();
                        self.cfg.terminate(
                            block,
                            source_info,
                            TerminatorKind::Drop {
                                place: local.into(),
                                target: next,
                                unwind: UnwindAction::Continue,
                                replace: false,
                            },
                        );
                        block = next;
                    }
                    DropKind::ForLint => {
                        self.cfg.push(
                            block,
                            Statement {
                                source_info,
                                kind: StatementKind::BackwardIncompatibleDropHint {
                                    place: Box::new(local.into()),
                                    reason: BackwardIncompatibleDropReason::Edition2024,
                                },
                            },
                        );
                    }
                    DropKind::Storage => {
                        // Only temps and vars need their storage dead.
                        assert!(local.index() > self.arg_count);
                        self.cfg.push(
                            block,
                            Statement { source_info, kind: StatementKind::StorageDead(local) },
                        );
                    }
                }
            }
        }

        block.unit()
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
        let is_coroutine = self.coroutine.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_coroutine && needs_cleanup,
            self.arg_count,
        )
        .into_block()
    }

    /// Possibly creates a new source scope if `current_root` and `parent_root`
    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
    pub(crate) fn maybe_new_source_scope(
        &mut self,
        span: Span,
        current_id: HirId,
        parent_id: HirId,
    ) {
        let (current_root, parent_root) =
            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
                // the only part of rustc that tracks MIR -> HIR is the
                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
                // locations. Normally the number of source scopes is limited to the set of nodes
                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
                // behavior to maximize the number of source scopes, increasing the granularity of
                // the MIR->HIR mapping.
                (current_id, parent_id)
            } else {
                // Use `maybe_lint_level_root_bounded` to avoid adding HIR dependencies on our
                // parents. We estimate the true lint roots here to avoid creating a lot of source
                // scopes.
                (
                    self.maybe_lint_level_root_bounded(current_id),
                    if parent_id == self.hir_id {
                        parent_id // this is very common
                    } else {
                        self.maybe_lint_level_root_bounded(parent_id)
                    },
                )
            };

        if current_root != parent_root {
            let lint_level = LintLevel::Explicit(current_root);
            self.source_scope = self.new_source_scope(span, lint_level);
        }
    }

    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
    /// It stops at `self.hir_id` and just returns it if reached.
    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
        // This assertion lets us just store `ItemLocalId` in the cache, rather
        // than the full `HirId`.
        assert_eq!(orig_id.owner, self.hir_id.owner);

        let mut id = orig_id;
        loop {
            if id == self.hir_id {
                // This is a moderately common case, mostly hit for previously unseen nodes.
                break;
            }

            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
                // This is a rare case. It's for a node path that doesn't reach the root due to an
                // intervening lint level attribute. This result doesn't get cached.
                return id;
            }

            let next = self.tcx.parent_hir_id(id);
            if next == id {
                bug!("lint traversal reached the root of the crate");
            }
            id = next;

            // This lookup is just an optimization; it can be removed without affecting
            // functionality. It might seem strange to see this at the end of this loop, but the
            // `orig_id` passed in to this function is almost always previously unseen, for which a
            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
            // cache lookups have a very high hit rate.
            if self.lint_level_roots_cache.contains(id.local_id) {
                break;
            }
        }

        // `orig_id` traced to `self_id`; record this fact. If `orig_id` is a leaf node it will
        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
        // The performance wins from the cache all come from caching non-leaf nodes.
        self.lint_level_roots_cache.insert(orig_id.local_id);
        self.hir_id
    }

    /// Creates a new source scope, nested in the current one.
    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
            },
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    /// ```ignore (illustrative)
    /// let x = foo(bar(X, Y));
    /// ```
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    /// ```ignore (illustrative)
    /// foo(Box::new(2), panic!())
    /// ```
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    pub(crate) fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }

    // Scheduling drops
    // ================

    pub(crate) fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` shouldn't be the return
    /// place, or a function parameter.
    pub(crate) fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value | DropKind::ForLint => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with body argument {:?} \
                        but its storage does not require a drop",
                        local,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into an already built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example, consider that there are two scopes with a drop in each.
        // These are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, innermost scope is added along with a new drop into
        // both innermost and outermost scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
        // | |             +-+ +-------------+  |                       |
        // | +---|invalid outer_scope cache|----+                       |
        // +----=----------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
        // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the innermost to the outermost,
        // invalidating caches of each scope visited. This way the bare minimum of the
        // caches gets invalidated. I.e., if a new drop is added into the middle scope, the
        // cache of the outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the coroutine drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or coroutine drop paths. This means that for
        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.coroutine.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }

    /// Schedule emission of a backwards incompatible drop lint hint.
    /// Applicable only to temporary values for now.
    #[instrument(level = "debug", skip(self))]
    pub(crate) fn schedule_backwards_incompatible_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        // Note that we are *not* gating BIDs here on whether they have a significant destructor.
        // We need to know all of them so that we can capture potential borrow-checking errors.
        for scope in self.scopes.scopes.iter_mut().rev() {
            // Since we are inserting a linting MIR statement, we have to invalidate the caches.
            scope.invalidate_cache();
            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: DropKind::ForLint,
                });

                return;
            }
        }
        span_bug!(
            span,
            "region scope {:?} not in scope to drop {:?} for linting",
            region_scope,
            local
        );
    }
1188
1189    /// Indicates that the "local operand" stored in `local` is
1190    /// *moved* at some point during execution (see `local_scope` for
1191    /// more information about what a "local operand" is -- in short,
1192    /// it's an intermediate operand created as part of preparing some
1193    /// MIR instruction). We use this information to suppress
1194    /// redundant drops on the non-unwind paths. This results in less
1195    /// MIR, but also avoids spurious borrow check errors
1196    /// (c.f. #64391).
1197    ///
1198    /// Example: when compiling the call to `foo` here:
1199    ///
1200    /// ```ignore (illustrative)
1201    /// foo(bar(), ...)
1202    /// ```
1203    ///
1204    /// we would evaluate `bar()` to an operand `_X`. We would also
1205    /// schedule `_X` to be dropped when the expression scope for
1206    /// `foo(bar())` is exited. This is relevant, for example, if the
1207    /// later arguments should unwind (it would ensure that `_X` gets
1208    /// dropped). However, if no unwind occurs, then `_X` will be
1209    /// unconditionally consumed by the `call`:
1210    ///
1211    /// ```ignore (illustrative)
1212    /// bb {
1213    ///   ...
1214    ///   _R = CALL(foo, _X, ...)
1215    /// }
1216    /// ```
1217    ///
1218    /// However, `_X` is still registered to be dropped, and so if we
1219    /// do nothing else, we would generate a `DROP(_X)` that occurs
1220    /// after the call. This will later be optimized out by the
1221    /// drop-elaboration code, but in the meantime it can lead to
1222    /// spurious borrow-check errors -- the problem, ironically, is
1223    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1224    /// that it creates. See #64391 for an example.
1225    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
1226        let local_scope = self.local_scope();
1227        let scope = self.scopes.scopes.last_mut().unwrap();
1228
1229        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1230
1231        // look for moves of a local variable, like `MOVE(_X)`
1232        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
1233            Operand::Copy(_) | Operand::Constant(_) => None,
1234            Operand::Move(place) => place.as_local(),
1235        });
1236
1237        for local in locals_moved {
1238            // check if we have a Drop for this operand and -- if so
1239            // -- add it to the list of moved operands. Note that this
1240            // local might not have been an operand created for this
1241            // call, it could come from other places too.
1242            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1243                scope.moved_locals.push(local);
1244            }
1245        }
1246    }
1247
1248    // Other
1249    // =====
1250
1251    /// Returns the [DropIdx] for the innermost drop if the function unwound at
1252    /// this point. The `DropIdx` will be created if it doesn't already exist.
1253    fn diverge_cleanup(&mut self) -> DropIdx {
1254        // It is okay to use dummy span because the getting scope index on the topmost scope
1255        // must always succeed.
1256        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1257    }
1258
1259    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1260    /// some ancestor scope instead of the current scope.
1261    /// It is possible to unwind to some ancestor scope if some drop panics as
1262    /// the program breaks out of a if-then scope.
1263    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1264        let target = self.scopes.scope_index(target_scope, span);
1265        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1266            .iter()
1267            .enumerate()
1268            .rev()
1269            .find_map(|(scope_idx, scope)| {
1270                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1271            })
1272            .unwrap_or((0, ROOT_NODE));
1273
1274        if uncached_scope > target {
1275            return cached_drop;
1276        }
1277
1278        let is_coroutine = self.coroutine.is_some();
1279        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1280            for drop in &scope.drops {
1281                if is_coroutine || drop.kind == DropKind::Value {
1282                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1283                }
1284            }
1285            scope.cached_unwind_block = Some(cached_drop);
1286        }
1287
1288        cached_drop
1289    }
1290
1291    /// Prepares to create a path that performs all required cleanup for a
1292    /// terminator in the given basic block that can unwind.
1293    ///
1294    /// This path terminates in [TerminatorKind::UnwindResume]. The path isn't created until after all
1295    /// of the non-unwind paths in this item have been lowered.
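    ///
    /// A sketch of the typical call pattern (illustrative; the surrounding
    /// names are hypothetical):
    ///
    /// ```ignore (illustrative)
    /// let success = this.cfg.start_new_block();
    /// this.cfg.terminate(block, source_info, TerminatorKind::Call { /* ... */ });
    /// this.diverge_from(block); // registers `block` as an unwind entry point
    /// ```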
1296    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1297        debug_assert!(
1298            matches!(
1299                self.cfg.block_data(start).terminator().kind,
1300                TerminatorKind::Assert { .. }
1301                    | TerminatorKind::Call { .. }
1302                    | TerminatorKind::Drop { .. }
1303                    | TerminatorKind::FalseUnwind { .. }
1304                    | TerminatorKind::InlineAsm { .. }
1305            ),
1306            "diverge_from called on block with terminator that cannot unwind."
1307        );
1308
1309        let next_drop = self.diverge_cleanup();
1310        self.scopes.unwind_drops.add_entry_point(start, next_drop);
1311    }
1312
1313    /// Sets up a path that performs all required cleanup for dropping a
1314    /// coroutine, starting from the given block that ends in
1315    /// [TerminatorKind::Yield].
1316    ///
1317    /// This path terminates in [TerminatorKind::CoroutineDrop].
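    ///
    /// Once the coroutine drop tree has been built, the effect is roughly the
    /// following (illustrative; fields abbreviated):
    ///
    /// ```ignore (illustrative)
    /// yield_block: Yield { value, resume, drop: Some(<entry of coroutine drop tree>), .. }
    /// ```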
1318    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
1319        debug_assert!(
1320            matches!(
1321                self.cfg.block_data(yield_block).terminator().kind,
1322                TerminatorKind::Yield { .. }
1323            ),
1324            "coroutine_drop_cleanup called on block with non-yield terminator."
1325        );
1326        let (uncached_scope, mut cached_drop) = self
1327            .scopes
1328            .scopes
1329            .iter()
1330            .enumerate()
1331            .rev()
1332            .find_map(|(scope_idx, scope)| {
1333                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1334            })
1335            .unwrap_or((0, ROOT_NODE));
1336
1337        for scope in &mut self.scopes.scopes[uncached_scope..] {
1338            for drop in &scope.drops {
1339                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
1340            }
1341            scope.cached_coroutine_drop_block = Some(cached_drop);
1342        }
1343
1344        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
1345    }
1346
1347    /// Utility function for *non*-scope code to build its own drops.
1348    /// Forces a drop at this point in the MIR by creating a new block.
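    ///
    /// A rough sketch of the CFG this creates (illustrative block names):
    ///
    /// ```ignore (illustrative)
    /// block:         Drop(place, replace: true) -> [return: assign, unwind: assign_unwind]
    /// assign:        place = value; ...  // normal path continues here
    /// assign_unwind: place = value; ...  // cleanup path, later linked into the unwind drop tree
    /// ```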
1349    pub(crate) fn build_drop_and_replace(
1350        &mut self,
1351        block: BasicBlock,
1352        span: Span,
1353        place: Place<'tcx>,
1354        value: Rvalue<'tcx>,
1355    ) -> BlockAnd<()> {
1356        let source_info = self.source_info(span);
1357
1358        // create the new block for the assignment
1359        let assign = self.cfg.start_new_block();
1360        self.cfg.push_assign(assign, source_info, place, value.clone());
1361
1362        // create the new block for the assignment in the case of unwinding
1363        let assign_unwind = self.cfg.start_new_cleanup_block();
1364        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1365
1366        self.cfg.terminate(
1367            block,
1368            source_info,
1369            TerminatorKind::Drop {
1370                place,
1371                target: assign,
1372                unwind: UnwindAction::Cleanup(assign_unwind),
1373                replace: true,
1374            },
1375        );
1376        self.diverge_from(block);
1377
1378        assign.unit()
1379    }
1380
1381    /// Creates an `Assert` terminator and returns the success block.
1382    /// If the boolean condition operand does not evaluate to the expected value,
1383    /// a runtime panic is raised with the given message.
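    ///
    /// Sketch of use (illustrative; the operands and message are hypothetical):
    ///
    /// ```ignore (illustrative)
    /// let success = this.assert(
    ///     block,
    ///     Operand::Move(is_in_bounds),
    ///     true,
    ///     AssertKind::BoundsCheck { len, index },
    ///     span,
    /// );
    /// // `success` is only reached if the condition evaluates to `true`;
    /// // otherwise the program panics with the bounds-check message.
    /// ```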
1384    pub(crate) fn assert(
1385        &mut self,
1386        block: BasicBlock,
1387        cond: Operand<'tcx>,
1388        expected: bool,
1389        msg: AssertMessage<'tcx>,
1390        span: Span,
1391    ) -> BasicBlock {
1392        let source_info = self.source_info(span);
1393        let success_block = self.cfg.start_new_block();
1394
1395        self.cfg.terminate(
1396            block,
1397            source_info,
1398            TerminatorKind::Assert {
1399                cond,
1400                expected,
1401                msg: Box::new(msg),
1402                target: success_block,
1403                unwind: UnwindAction::Continue,
1404            },
1405        );
1406        self.diverge_from(block);
1407
1408        success_block
1409    }
1410
1411    /// Unschedules any drops in the top scope.
1412    ///
1413    /// This is only needed for `match` arm scopes, because they have one
1414    /// entrance per pattern, but only one exit.
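    ///
    /// For example (illustrative), an arm with an or-pattern has one entrance
    /// per alternative but a single arm scope, which is why its scheduled drops
    /// may need to be cleared between entrances:
    ///
    /// ```ignore (illustrative)
    /// match x {
    ///     Some(0) | None => { /* one arm scope, two entrances */ }
    ///     _ => {}
    /// }
    /// ```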
1415    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1416        let top_scope = self.scopes.scopes.last_mut().unwrap();
1417
1418        assert_eq!(top_scope.region_scope, region_scope);
1419
1420        top_scope.drops.clear();
1421        top_scope.invalidate_cache();
1422    }
1423}
1424
1425/// Builds drops for `pop_scope` and `leave_top_scope`.
1426///
1427/// # Parameters
1428///
1429/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
1430/// * `scope`, which describes the drops that will occur on exiting the scope during regular execution
1431/// * `block`, the block to branch to once drops are complete (assuming no unwind occurs)
1432/// * `unwind_to`, which describes the drops that would occur at this point in the code if a
1433///   panic occurred (a subset of the drops in `scope`, since we sometimes elide `StorageDead` and other
1434///   instructions on unwinding)
1435/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
1436/// * `arg_count`, the number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
1437fn build_scope_drops<'tcx>(
1438    cfg: &mut CFG<'tcx>,
1439    unwind_drops: &mut DropTree,
1440    scope: &Scope,
1441    block: BasicBlock,
1442    unwind_to: DropIdx,
1443    storage_dead_on_unwind: bool,
1444    arg_count: usize,
1445) -> BlockAnd<()> {
1446    debug!("build_scope_drops({:?} -> {:?})", block, scope);
1447
1448    // Build up the drops in evaluation order. The end result will
1449    // look like:
1450    //
1451    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1452    //               |                    |                 |
1453    //               :                    |                 |
1454    //                                    V                 V
1455    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1456    //
1457    // The horizontal arrows represent the execution path when the drops return
1458    // successfully. The downwards arrows represent the execution path when the
1459    // drops panic (panicking while unwinding will abort, so there's no need for
1460    // another set of arrows).
1461    //
1462    // For coroutines, we unwind from a drop on a local to its StorageDead
1463    // statement. For other functions we don't worry about StorageDead. The
1464    // drops for the unwind path should have already been generated by
1465    // `diverge_cleanup`.
1466
1467    // `unwind_to` indicates what needs to be dropped should unwinding occur.
1468    // This is a subset of what needs to be dropped when exiting the scope.
1469    // As we emit the scope's exit drops, we also move `unwind_to` backwards to match,
1470    // so that it can be used if a destructor panics.
1471    let mut unwind_to = unwind_to;
1472
1473    // The block that we should jump to after drops complete. We start by building the final drop (`drops[n]`
1474    // in the diagram above) and then build the drops (e.g., `drop[1]`, `drop[0]`) that come before it.
1475    // `block` begins as the successor of `drops[n]` and then becomes `drops[n]`, so that `drops[n-1]`
1476    // will branch to `drops[n]`.
1477    let mut block = block;
1478
1479    for drop_data in scope.drops.iter().rev() {
1480        let source_info = drop_data.source_info;
1481        let local = drop_data.local;
1482
1483        match drop_data.kind {
1484            DropKind::Value => {
1485                // `unwind_to` should drop the value that we're about to
1486                // schedule. If dropping this value panics, then we continue
1487                // with the *next* value on the unwind path.
1488                //
1489                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
1490                // because `drops[n]` should unwind to `drops[n-1]`.
1491                debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
1492                debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
1493                unwind_to = unwind_drops.drops[unwind_to].next;
1494
1495                // If the operand has been moved, and we are not on an unwind
1496                // path, then don't generate the drop. (We only take this into
1497                // account for non-unwind paths so as not to disturb the
1498                // caching mechanism.)
1499                if scope.moved_locals.iter().any(|&o| o == local) {
1500                    continue;
1501                }
1502
1503                unwind_drops.add_entry_point(block, unwind_to);
1504                let next = cfg.start_new_block();
1505                cfg.terminate(
1506                    block,
1507                    source_info,
1508                    TerminatorKind::Drop {
1509                        place: local.into(),
1510                        target: next,
1511                        unwind: UnwindAction::Continue,
1512                        replace: false,
1513                    },
1514                );
1515                block = next;
1516            }
1517            DropKind::ForLint => {
1518                // As in the `DropKind::Storage` case below:
1519                // normally lint-related drop hints are not emitted on the unwind path,
1520                // so we can leave `unwind_to` unmodified. In some cases, however, we
1521                // also emit them on the unwind path, and then `unwind_to` needs to be
1522                // adjusted accordingly.
1523                if storage_dead_on_unwind {
1524                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
1525                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
1526                    unwind_to = unwind_drops.drops[unwind_to].next;
1527                }
1528
1529                // If the operand has been moved, and we are not on an unwind
1530                // path, then don't generate the drop. (We only take this into
1531                // account for non-unwind paths so as not to disturb the
1532                // caching mechanism.)
1533                if scope.moved_locals.iter().any(|&o| o == local) {
1534                    continue;
1535                }
1536
1537                cfg.push(
1538                    block,
1539                    Statement {
1540                        source_info,
1541                        kind: StatementKind::BackwardIncompatibleDropHint {
1542                            place: Box::new(local.into()),
1543                            reason: BackwardIncompatibleDropReason::Edition2024,
1544                        },
1545                    },
1546                );
1547            }
1548            DropKind::Storage => {
1549                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
1550                // need to adjust `unwind_to` on this path. However, in some specific cases
1551                // we *do* emit storage-dead nodes on the unwind path. In that case, now that
1552                // this storage-dead has been handled, we need to advance the `unwind_to` pointer
1553                // so that the unwind paths of any later drops do not include it again.
1554                if storage_dead_on_unwind {
1555                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
1556                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
1557                    unwind_to = unwind_drops.drops[unwind_to].next;
1558                }
1559                // Only temps and vars need their storage dead.
1560                assert!(local.index() > arg_count);
1561                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
1562            }
1563        }
1564    }
1565    block.unit()
1566}
1567
1568impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1569    /// Build a drop tree for a breakable scope.
1570    ///
1571    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1572    /// loop. Otherwise this is for `break` or `return`.
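    ///
    /// Sketch of use (illustrative; the surrounding names are hypothetical):
    ///
    /// ```ignore (illustrative)
    /// // `break`-style exits have no continue block:
    /// let break_block = self.build_exit_tree(break_drops, region_scope, span, None);
    /// // `continue` funnels its drops into the loop header:
    /// self.build_exit_tree(continue_drops, region_scope, span, Some(loop_head));
    /// ```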
1573    fn build_exit_tree(
1574        &mut self,
1575        mut drops: DropTree,
1576        else_scope: region::Scope,
1577        span: Span,
1578        continue_block: Option<BasicBlock>,
1579    ) -> Option<BlockAnd<()>> {
1580        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
1581        let is_coroutine = self.coroutine.is_some();
1582
1583        // Link the exit drop tree to the unwind drop tree.
1584        if drops.drops.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
1585            let unwind_target = self.diverge_cleanup_target(else_scope, span);
1586            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1587            for (drop_idx, drop_node) in drops.drops.iter_enumerated().skip(1) {
1588                match drop_node.data.kind {
1589                    DropKind::Storage | DropKind::ForLint => {
1590                        if is_coroutine {
1591                            let unwind_drop = self
1592                                .scopes
1593                                .unwind_drops
1594                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1595                            unwind_indices.push(unwind_drop);
1596                        } else {
1597                            unwind_indices.push(unwind_indices[drop_node.next]);
1598                        }
1599                    }
1600                    DropKind::Value => {
1601                        let unwind_drop = self
1602                            .scopes
1603                            .unwind_drops
1604                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1605                        self.scopes.unwind_drops.add_entry_point(
1606                            blocks[drop_idx].unwrap(),
1607                            unwind_indices[drop_node.next],
1608                        );
1609                        unwind_indices.push(unwind_drop);
1610                    }
1611                }
1612            }
1613        }
1614        blocks[ROOT_NODE].map(BasicBlock::unit)
1615    }
1616
1617    /// Build the unwind and coroutine drop trees.
1618    pub(crate) fn build_drop_trees(&mut self) {
1619        if self.coroutine.is_some() {
1620            self.build_coroutine_drop_trees();
1621        } else {
1622            Self::build_unwind_tree(
1623                &mut self.cfg,
1624                &mut self.scopes.unwind_drops,
1625                self.fn_span,
1626                &mut None,
1627            );
1628        }
1629    }
1630
1631    fn build_coroutine_drop_trees(&mut self) {
1632        // Build the drop tree for dropping the coroutine while it's suspended.
1633        let drops = &mut self.scopes.coroutine_drops;
1634        let cfg = &mut self.cfg;
1635        let fn_span = self.fn_span;
1636        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
1637        if let Some(root_block) = blocks[ROOT_NODE] {
1638            cfg.terminate(
1639                root_block,
1640                SourceInfo::outermost(fn_span),
1641                TerminatorKind::CoroutineDrop,
1642            );
1643        }
1644
1645        // Build the drop tree for unwinding in the normal control flow paths.
1646        let resume_block = &mut None;
1647        let unwind_drops = &mut self.scopes.unwind_drops;
1648        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
1649
1650        // Build the drop tree for unwinding when dropping a suspended
1651        // coroutine.
1652        //
1653        // This is a separate tree from the standard unwind paths, in order to
1654        // prevent drop elaboration from creating drop flags that would have
1655        // to be captured by the coroutine. I'm not sure how important this
1656        // optimization is, but it is here.
1657        for (drop_idx, drop_node) in drops.drops.iter_enumerated() {
1658            if let DropKind::Value = drop_node.data.kind {
1659                debug_assert!(drop_node.next < drops.drops.next_index());
1660                drops.entry_points.push((drop_node.next, blocks[drop_idx].unwrap()));
1661            }
1662        }
1663        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
1664    }
1665
1666    fn build_unwind_tree(
1667        cfg: &mut CFG<'tcx>,
1668        drops: &mut DropTree,
1669        fn_span: Span,
1670        resume_block: &mut Option<BasicBlock>,
1671    ) {
1672        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
1673        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1674            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);
1675
1676            *resume_block = blocks[ROOT_NODE];
1677        }
1678    }
1679}
1680
1681// DropTreeBuilder implementations.
1682
1683struct ExitScopes;
1684
1685impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1686    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1687        cfg.start_new_block()
1688    }
1689    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1690        // There should be an existing terminator with real source info and a
1691        // dummy TerminatorKind. Replace it with a proper goto.
1692        // (The dummy is added by `break_scope` and `break_for_else`.)
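        // Illustratively (with hypothetical block indices), a `break` lowered as
        //   bb4: <dummy terminator>
        // becomes, once it is linked here,
        //   bb4: goto -> bb7   // bb7: entry of the exit drop tree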
1693        let term = cfg.block_data_mut(from).terminator_mut();
1694        if let TerminatorKind::UnwindResume = term.kind {
1695            term.kind = TerminatorKind::Goto { target: to };
1696        } else {
1697            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
1698        }
1699    }
1700}
1701
1702struct CoroutineDrop;
1703
1704impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
1705    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1706        cfg.start_new_block()
1707    }
1708    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1709        let term = cfg.block_data_mut(from).terminator_mut();
1710        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
1711            *drop = Some(to);
1712        } else {
1713            span_bug!(
1714                term.source_info.span,
1715                "cannot enter coroutine drop tree from {:?}",
1716                term.kind
1717            )
1718        }
1719    }
1720}
1721
1722struct Unwind;
1723
1724impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1725    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
1726        cfg.start_new_cleanup_block()
1727    }
1728    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1729        let term = &mut cfg.block_data_mut(from).terminator_mut();
1730        match &mut term.kind {
1731            TerminatorKind::Drop { unwind, .. } => {
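                // If the drop already has a cleanup block (e.g. the `assign_unwind`
                // block created by `build_drop_and_replace`), that block has no
                // terminator yet, so finish it with a goto into the unwind drop
                // tree. Otherwise, point the drop's unwind edge directly at `to`.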
1732                if let UnwindAction::Cleanup(unwind) = *unwind {
1733                    let source_info = term.source_info;
1734                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
1735                } else {
1736                    *unwind = UnwindAction::Cleanup(to);
1737                }
1738            }
1739            TerminatorKind::FalseUnwind { unwind, .. }
1740            | TerminatorKind::Call { unwind, .. }
1741            | TerminatorKind::Assert { unwind, .. }
1742            | TerminatorKind::InlineAsm { unwind, .. } => {
1743                *unwind = UnwindAction::Cleanup(to);
1744            }
1745            TerminatorKind::Goto { .. }
1746            | TerminatorKind::SwitchInt { .. }
1747            | TerminatorKind::UnwindResume
1748            | TerminatorKind::UnwindTerminate(_)
1749            | TerminatorKind::Return
1750            | TerminatorKind::TailCall { .. }
1751            | TerminatorKind::Unreachable
1752            | TerminatorKind::Yield { .. }
1753            | TerminatorKind::CoroutineDrop
1754            | TerminatorKind::FalseEdge { .. } => {
1755                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
1756            }
1757        }
1758    }
1759}