rustc_mir_build/builder/scope.rs

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` for each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not even always SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should branch to.
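
For instance, here is an illustrative sketch (not a doctest; `cond()` is just a
stand-in condition) of a labeled `break` crossing two breakable scopes. The
drops scheduled in every scope between the `break` and the loop labeled
`'outer` are emitted on the exit edge:

```ignore (illustrative)
'outer: loop {
    let a = String::new();
    loop {
        let b = String::new();
        if cond() { break 'outer; } // drops `b`, then `a`, on this edge
    }
}
```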

*/

use std::mem;

use rustc_data_structures::fx::FxHashMap;
use rustc_hir::HirId;
use rustc_index::{IndexSlice, IndexVec};
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_middle::thir::{ExprId, LintLevel};
use rustc_middle::{bug, span_bug};
use rustc_session::lint::Level;
use rustc_span::source_map::Spanned;
use rustc_span::{DUMMY_SP, Span};
use tracing::{debug, instrument};

use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};

#[derive(Debug)]
pub(crate) struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
    coroutine_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

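    /// Locals whose values have been moved out of this scope
    /// (see `record_operands_moved`); used to suppress their now-redundant
    /// value drops on the non-unwind paths.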
    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// coroutine drop path.
    cached_coroutine_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
    ForLint,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression)
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope
    region_scope: region::Scope,
    /// Drops that happen on the `else` path.
    else_drops: DropTree,
}

/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
pub(crate) enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    #[orderable]
    struct DropIdx {}
}

const ROOT_NODE: DropIdx = DropIdx::ZERO;

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
    /// Nodes in the drop tree, containing drop data and a link to the next node.
    drops: IndexVec<DropIdx, DropNode>,
    /// Map for finding the index of an existing node, given its contents.
    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

/// A single node in the drop tree.
#[derive(Debug)]
struct DropNode {
    /// Info about the drop to be performed at this node in the drop tree.
    data: DropData,
    /// Index of the "next" drop to perform (in drop order, not declaration order).
    next: DropIdx,
}

/// Subset of [`DropNode`] used for reverse lookup in a hash table.
#[derive(Debug, PartialEq, Eq, Hash)]
struct DropNodeKey {
    next: DropIdx,
    local: Local,
    kind: DropKind,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for use by
    /// optimizations in the MIR coroutine transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value | DropKind::ForLint => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_coroutine_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drops = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
        Self { drops, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
    }

    /// Adds a node to the drop tree, consisting of drop data and the index of
    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
    ///
    /// If there is already an equivalent node in the tree, nothing is added, and
    /// that node's index is returned. Otherwise, the new node's index is returned.
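    ///
    /// For example, an illustrative sketch (`d1`, `d2`, and `some_block` are
    /// hypothetical values, not names from this module):
    ///
    /// ```ignore (illustrative)
    /// let first = tree.add_drop(d1, ROOT_NODE); // performed last, just before the root
    /// let entry = tree.add_drop(d2, first);     // performed first
    /// // Jumping into the tree at `entry` runs `d2`, then `d1`, then reaches the root block.
    /// tree.add_entry_point(some_block, entry);
    /// ```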
    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .existing_drops_map
            .entry(DropNodeKey { next, local: data.local, kind: data.kind })
            // Create a new node, and also add its index to the map.
            .or_insert_with(|| drops.push(DropNode { data, next }))
    }

    /// Registers `from` as an entry point to this drop tree, at `to`.
    ///
    /// During [`Self::build_mir`], `from` will be linked to the corresponding
    /// block within the drop tree.
    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_node) in self.drops.iter_enumerated().rev() {
            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::link_entry_point(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_node.data.kind {
                needs_block[drop_node.next] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_node.next] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_node) in self.drops.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_node.data.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_node.next].unwrap(),
                        // The caller will handle this if needed.
                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
                        place: drop_node.data.local.into(),
                        replace: false,
                    };
                    cfg.terminate(block, drop_node.data.source_info, terminator);
                }
                DropKind::ForLint => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(drop_node.data.local.into()),
                            reason: BackwardIncompatibleDropReason::Edition2024,
                        },
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::StorageDead(drop_node.data.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            coroutine_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_coroutine_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    /// Start a breakable scope, which tracks where `continue`, `break` and
    /// `return` should branch to.
    pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    /// ```
    ///
    /// There are three possible ways the condition can be false and we may have
    /// to drop `x`, `x` and `y`, or neither, depending on which binding fails.
    /// To handle this correctly we use a `DropTree` in a similar way to a
    /// `loop` expression and 'break' out on all of the 'else' paths.
    ///
    /// Notes:
    /// - We don't need to keep a stack of scopes in the `Builder` because the
    ///   'else' paths will only leave the innermost scope.
    /// - This is also used for match guards.
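    ///
    /// A minimal usage sketch (illustrative only; `lower_condition` is a
    /// placeholder for the real condition-lowering code):
    ///
    /// ```ignore (illustrative)
    /// let (then_blk, else_blk) = this.in_if_then_scope(scope, expr_span, |this| {
    ///     // Lower the condition, calling `break_for_else` on each failing path,
    ///     // and return the block reached when the condition holds.
    ///     lower_condition(this, block, cond)
    /// });
    /// ```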
    pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = f(self).into_block();

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block =
            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
                || self.cfg.start_new_block(),
                |else_block_and| else_block_and.into_block(),
            );

        (then_block, else_block)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    #[instrument(skip(self, f), level = "debug")]
    pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        let source_scope = self.source_scope;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            let parent_id =
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root;
            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        block = self.pop_scope(region_scope, block).into_block();
        self.source_scope = source_scope;
        debug!(?block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
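    ///
    /// For example (an illustrative sketch; `cond()` is a stand-in condition),
    /// breaking out of a labeled block emits the drops scheduled between the
    /// `break` and the target scope on the outgoing edge:
    ///
    /// ```ignore (illustrative)
    /// 'blk: {
    ///     let s = String::new();
    ///     if cond() { break 'blk; } // `s` is dropped on this edge
    ///     // ...
    /// }
    /// ```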
    pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprId>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        match (destination, value) {
            (Some(destination), Some(value)) => {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                block = self.expr_into_dest(destination, block, value).into_block();
                self.block_context.pop();
            }
            (Some(destination), None) => {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with a value must have a destination")
            }
            (None, None) => {
                if self.tcx.sess.instrument_coverage() {
                    // Normally we wouldn't build any MIR in this case, but that makes it
                    // harder for coverage instrumentation to extract a relevant span for
                    // `continue` expressions. So here we inject a dummy statement with the
                    // desired span.
                    self.cfg.push_coverage_span_marker(block, source_info);
                }
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
            else {
                self.tcx.dcx().span_delayed_bug(
                    source_info.span,
                    "unlabelled `continue` within labelled block",
                );
                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);

                return self.cfg.start_new_block().unit();
            };
            drops
        };

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);

        self.cfg.start_new_block().unit()
    }

    /// Sets up the drops for breaking from `block` due to an `if` condition
    /// that turned out to be false.
    ///
    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
    /// there is an if-then scope to tell us what the target scope is.
    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_ref()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        let target = if_then_scope.region_scope;
        let scope_index = self.scopes.scope_index(target, source_info.span);

        // Upgrade `if_then_scope` to `&mut`.
        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
    }

    /// Sets up the drops for explicit tail calls.
    ///
    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
    /// Instead, all scheduled drops are immediately added to the CFG.
    pub(crate) fn break_for_tail_call(
        &mut self,
        mut block: BasicBlock,
        args: &[Spanned<Operand<'tcx>>],
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let arg_drops: Vec<_> = args
            .iter()
            .rev()
            .filter_map(|arg| match &arg.node {
                Operand::Copy(_) => bug!("copy op in tail call args"),
                Operand::Move(place) => {
                    let local =
                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));

                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                        return None;
                    }

                    Some(DropData { source_info, local, kind: DropKind::Value })
                }
                Operand::Constant(_) => None,
            })
            .collect();

        let mut unwind_to = self.diverge_cleanup_target(
            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
            DUMMY_SP,
        );
        let typing_env = self.typing_env();
        let unwind_drops = &mut self.scopes.unwind_drops;

        // the innermost scope contains only the destructors for the tail call arguments
        // we only want to drop these in case of a panic, so we skip it
        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
            for drop_data in scope.drops.iter().rev() {
                let source_info = drop_data.source_info;
                let local = drop_data.local;

                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
                    continue;
                }

                match drop_data.kind {
                    DropKind::Value => {
                        // `unwind_to` should drop the value that we're about to
                        // schedule. If dropping this value panics, then we continue
                        // with the *next* value on the unwind path.
                        debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
                        debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
                        unwind_to = unwind_drops.drops[unwind_to].next;

                        let mut unwind_entry_point = unwind_to;

                        // the tail call arguments must be dropped if any of these drops panic
                        for drop in arg_drops.iter().copied() {
                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
                        }

                        unwind_drops.add_entry_point(block, unwind_entry_point);

                        let next = self.cfg.start_new_block();
                        self.cfg.terminate(
                            block,
                            source_info,
                            TerminatorKind::Drop {
                                place: local.into(),
                                target: next,
                                unwind: UnwindAction::Continue,
                                replace: false,
                            },
                        );
                        block = next;
                    }
                    DropKind::ForLint => {
                        self.cfg.push(
                            block,
                            Statement {
                                source_info,
                                kind: StatementKind::BackwardIncompatibleDropHint {
                                    place: Box::new(local.into()),
                                    reason: BackwardIncompatibleDropReason::Edition2024,
                                },
                            },
                        );
                    }
                    DropKind::Storage => {
                        // Only temps and vars need their storage dead.
                        assert!(local.index() > self.arg_count);
                        self.cfg.push(
                            block,
                            Statement { source_info, kind: StatementKind::StorageDead(local) },
                        );
                    }
                }
            }
        }

        block.unit()
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
        let is_coroutine = self.coroutine.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_coroutine && needs_cleanup,
            self.arg_count,
        )
        .into_block()
    }

    /// Possibly creates a new source scope if `current_root` and `parent_root`
    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
    pub(crate) fn maybe_new_source_scope(
        &mut self,
        span: Span,
        current_id: HirId,
        parent_id: HirId,
    ) {
        let (current_root, parent_root) =
            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
                // the only part of rustc that tracks MIR -> HIR is the
                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
                // locations. Normally the number of source scopes is limited to the set of nodes
                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
                // behavior to maximize the number of source scopes, increasing the granularity of
                // the MIR->HIR mapping.
                (current_id, parent_id)
            } else {
                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
                // parents. We estimate the true lint roots here to avoid creating a lot of source
                // scopes.
                (
                    self.maybe_lint_level_root_bounded(current_id),
                    if parent_id == self.hir_id {
                        parent_id // this is very common
                    } else {
                        self.maybe_lint_level_root_bounded(parent_id)
                    },
                )
            };

        if current_root != parent_root {
            let lint_level = LintLevel::Explicit(current_root);
            self.source_scope = self.new_source_scope(span, lint_level);
        }
    }

    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
    /// It stops at `self.hir_id` and just returns it if reached.
    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
        // This assertion lets us just store `ItemLocalId` in the cache, rather
        // than the full `HirId`.
        assert_eq!(orig_id.owner, self.hir_id.owner);

        let mut id = orig_id;
        let hir = self.tcx.hir();
        loop {
            if id == self.hir_id {
                // This is a moderately common case, mostly hit for previously unseen nodes.
                break;
            }

            if hir.attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
                // This is a rare case. It's for a node path that doesn't reach the root due to an
                // intervening lint level attribute. This result doesn't get cached.
                return id;
            }

            let next = self.tcx.parent_hir_id(id);
            if next == id {
                bug!("lint traversal reached the root of the crate");
            }
            id = next;

            // This lookup is just an optimization; it can be removed without affecting
            // functionality. It might seem strange to see this at the end of this loop, but the
            // `orig_id` passed in to this function is almost always previously unseen, for which a
            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
            // cache lookups have a very high hit rate.
            if self.lint_level_roots_cache.contains(id.local_id) {
                break;
            }
        }

        // `orig_id` traced to `self.hir_id`; record this fact. If `orig_id` is a leaf node it will
        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
        // The performance wins from the cache all come from caching non-leaf nodes.
        self.lint_level_roots_cache.insert(orig_id.local_id);
        self.hir_id
    }

    /// Creates a new source scope, nested in the current one.
    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    /// ```ignore (illustrative)
    /// let x = foo(bar(X, Y));
    /// ```
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    /// ```ignore (illustrative)
    /// foo(Box::new(2), panic!())
    /// ```
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    pub(crate) fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }

    // Scheduling drops
    // ================

    /// Schedules both the storage-dead and the value drop for `local` on exit
    /// from `region_scope`.
    pub(crate) fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` shouldn't be the return
    /// place, or a function parameter.
    pub(crate) fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value | DropKind::ForLint => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with body argument {:?} \
                        but its storage does not require a drop",
                        local,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into an already built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example, consider two scopes with a drop in each. These
        // are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, innermost scope is added along with a new drop into
        // both innermost and outermost scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
        // | |             +-+ +-------------+  |                       |
        // | +---|invalid outer_scope cache|----+                       |
        // +----=----------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the innermost to the outermost,
        // invalidating caches of each scope visited. This way the bare minimum of
        // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
        // cache of the outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the coroutine drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or coroutine drop paths. This means that for
        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.coroutine.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }

    /// Schedule emission of a backwards incompatible drop lint hint.
    /// Applicable only to temporary values for now.
    #[instrument(level = "debug", skip(self))]
    pub(crate) fn schedule_backwards_incompatible_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        // Note that we are *not* gating BIDs here on whether they have a significant destructor.
        // We need to know all of them so that we can capture potential borrow-checking errors.
        for scope in self.scopes.scopes.iter_mut().rev() {
            // Since we are inserting a linting MIR statement, we have to invalidate the caches
            scope.invalidate_cache();
            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: DropKind::ForLint,
                });

                return;
            }
        }
        span_bug!(
            span,
            "region scope {:?} not in scope to drop {:?} for linting",
            region_scope,
            local
        );
    }

    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    /// ```ignore (illustrative)
    /// foo(bar(), ...)
    /// ```
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    /// ```ignore (illustrative)
    /// bb {
    ///   ...
    ///   _R = CALL(foo, _X, ...)
    /// }
    /// ```
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
        let local_scope = self.local_scope();
        let scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }

    // Other
    // =====

    /// Returns the [DropIdx] for the innermost drop if the function unwound at
    /// this point. The `DropIdx` will be created if it doesn't already exist.
    fn diverge_cleanup(&mut self) -> DropIdx {
        // It is okay to use a dummy span because getting the scope index of the
        // topmost scope must always succeed.
        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
    }

    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
    /// some ancestor scope instead of the current scope.
    /// It is possible to unwind to some ancestor scope if some drop panics as
    /// the program breaks out of an if-then scope.
    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
        let target = self.scopes.scope_index(target_scope, span);
        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        if uncached_scope > target {
            return cached_drop;
        }

        let is_coroutine = self.coroutine.is_some();
        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
            for drop in &scope.drops {
                if is_coroutine || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }

1289    /// Prepares to create a path that performs all required cleanup for a
1290    /// terminator that can unwind at the given basic block.
1291    ///
1292    /// This path terminates in Resume. The path isn't created until after all
1293    /// of the non-unwind paths in this item have been lowered.
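    ///
    /// Illustrative shape of the result (hypothetical block/locals, not exact MIR):
    ///
    /// ```ignore (illustrative)
    /// bb {
    ///   ...
    ///   _R = CALL(f, ...) // this block is recorded as an entry point into the
    ///                     // unwind drop tree; its unwind edge is only wired up
    ///                     // once the rest of the body has been lowered
    /// }
    /// ```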
    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
                TerminatorKind::Assert { .. }
                    | TerminatorKind::Call { .. }
                    | TerminatorKind::Drop { .. }
                    | TerminatorKind::FalseUnwind { .. }
                    | TerminatorKind::InlineAsm { .. }
            ),
            "diverge_from called on block with terminator that cannot unwind."
        );

        let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry_point(start, next_drop);
    }

    /// Sets up a path that performs all required cleanup for dropping a
    /// coroutine, starting from the given block that ends in
    /// [TerminatorKind::Yield].
    ///
    /// This path terminates in CoroutineDrop.
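    ///
    /// Illustrative shape of the result (hypothetical block names):
    ///
    /// ```ignore (illustrative)
    /// bbY {
    ///   yield(...) // the yield's `drop` edge is recorded as an entry point into
    ///              // the coroutine drop tree and wired up when that tree is built
    /// }
    /// ```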
    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "coroutine_drop_cleanup called on block with non-yield terminator."
        );
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_coroutine_drop_block = Some(cached_drop);
        }

        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
    }

    /// Utility function for *non*-scope code to build its own drops: forces a
    /// drop at this point in the MIR by creating a new block.
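    ///
    /// Roughly, the generated shape is (block names match the locals below):
    ///
    /// ```ignore (illustrative)
    /// block:         drop(place)   // on success go to `assign`, on unwind to `assign_unwind`
    /// assign:        place = value // success path continues from here
    /// assign_unwind: place = value // cleanup path, which then keeps unwinding
    /// ```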
    pub(crate) fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Rvalue<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);

        // Create the new block for the assignment.
        let assign = self.cfg.start_new_block();
        self.cfg.push_assign(assign, source_info, place, value.clone());

        // Create the new block for the assignment in the case of unwinding.
        let assign_unwind = self.cfg.start_new_cleanup_block();
        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Drop {
                place,
                target: assign,
                unwind: UnwindAction::Cleanup(assign_unwind),
                replace: true,
            },
        );
        self.diverge_from(block);

        assign.unit()
    }

    /// Creates an `Assert` terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be raised with the given message.
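    ///
    /// Roughly (hypothetical block names, not exact MIR):
    ///
    /// ```ignore (illustrative)
    /// block {
    ///   assert(cond == expected, msg) // on success, continue in `success_block`;
    ///                                 // the unwind edge is added later via `diverge_from`
    /// }
    /// ```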
    pub(crate) fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);
        let success_block = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert {
                cond,
                expected,
                msg: Box::new(msg),
                target: success_block,
                unwind: UnwindAction::Continue,
            },
        );
        self.diverge_from(block);

        success_block
    }

    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
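    ///
    /// For example (hypothetical variants and a hypothetical `consume`):
    ///
    /// ```ignore (illustrative)
    /// match e {
    ///     A(s) | B(s) => consume(s), // one arm scope, two entrances
    ///     _ => {}
    /// }
    /// ```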
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
}

/// Builds drops for `pop_scope` and `leave_top_scope`.
///
/// # Parameters
///
/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
/// * `block`, the block in which to start emitting the drops; the block returned by this function
///   is the one to branch to once the drops are complete (assuming no unwind occurs)
/// * `unwind_to`, describes the drops that would occur at this point in the code if a
///   panic occurred (a subset of the drops in `scope`, since we sometimes elide `StorageDead` and other
///   instructions on unwinding)
/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    scope: &Scope,
    block: BasicBlock,
    unwind_to: DropIdx,
    storage_dead_on_unwind: bool,
    arg_count: usize,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //               |                    |                 |
    //               :                    |                 |
    //                                    V                 V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For coroutines, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup`.
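    //
    // For instance, if the scope scheduled `x` and then `y` (both
    // `DropKind::Value`), we emit `drop(y)` first and then `drop(x)`; each
    // drop terminator also becomes an entry point into the unwind drop tree,
    // covering whatever would still need to be dropped if that destructor
    // panicked.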

    // `unwind_to` indicates what needs to be dropped should unwinding occur.
    // This is a subset of what needs to be dropped when exiting the scope.
    // As we emit each drop below, we also move `unwind_to` backwards to match,
    // so that we can use it should that drop's destructor panic.
    let mut unwind_to = unwind_to;

    // `block` is the block we are currently emitting into. The first drop
    // (`drops[n]` in the diagram above) goes into the block we were given; each
    // `Drop` terminator targets a fresh block, which becomes the new `block` for
    // the drops and `StorageDead`s that follow (`drops[n-1]`, ..., `drop[0]`).
    // The block left at the end is returned: it is the one to branch to once all
    // of the drops are complete.
    let mut block = block;

    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                //
                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
                // because `drops[n]` should unwind to `drops[n-1]`.
                debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
                unwind_to = unwind_drops.drops[unwind_to].next;

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                unwind_drops.add_entry_point(block, unwind_to);
                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        place: local.into(),
                        target: next,
                        unwind: UnwindAction::Continue,
                        replace: false,
                    },
                );
                block = next;
            }
            DropKind::ForLint => {
                // As in the `DropKind::Storage` case below: normally lint-related
                // drops are not emitted on unwind, so we can leave `unwind_to`
                // unmodified; but in some cases we emit them on the unwind path as
                // well, and then we need to adjust `unwind_to` accordingly.
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].next;
                }

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                cfg.push(
                    block,
                    Statement {
                        source_info,
                        kind: StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(local.into()),
                            reason: BackwardIncompatibleDropReason::Edition2024,
                        },
                    },
                );
            }
            DropKind::Storage => {
                // Ordinarily, `StorageDead` is not emitted on unwind, so we don't
                // need to adjust `unwind_to` on this path. However, in some specific
                // cases we *do* emit `StorageDead` on the unwind path, and once this
                // one has been emitted we need to advance the `unwind_to` pointer so
                // that drops we emit later unwind past this `StorageDead` rather
                // than emitting it again.
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].data.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].next;
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}

impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Builds a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`.
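    ///
    /// For example, `loop { ...; continue; }` routes its `continue` exits
    /// through this tree with `continue_block` set to the loop head, so that
    /// control lands there once all of the tree's drops have run; for
    /// `break`/`return` trees, `continue_block` is `None` and the resulting
    /// root block (if any) is what this function returns.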
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        else_scope: region::Scope,
        span: Span,
        continue_block: Option<BasicBlock>,
    ) -> Option<BlockAnd<()>> {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = continue_block;

        drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
        let is_coroutine = self.coroutine.is_some();

        // Link the exit drop tree to the unwind drop tree.
        if drops.drops.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
            let unwind_target = self.diverge_cleanup_target(else_scope, span);
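            // `unwind_indices` maps each node of the exit drop tree to its
            // counterpart in the unwind drop tree (the root maps to
            // `unwind_target`), so that every exit drop that can panic knows
            // which unwind drops still have to run.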
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
            for (drop_idx, drop_node) in drops.drops.iter_enumerated().skip(1) {
                match drop_node.data.kind {
                    DropKind::Storage | DropKind::ForLint => {
                        if is_coroutine {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_node.next]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
                        self.scopes.unwind_drops.add_entry_point(
                            blocks[drop_idx].unwrap(),
                            unwind_indices[drop_node.next],
                        );
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }

    /// Builds the unwind and coroutine drop trees.
    pub(crate) fn build_drop_trees(&mut self) {
        if self.coroutine.is_some() {
            self.build_coroutine_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }

    fn build_coroutine_drop_trees(&mut self) {
        // Build the drop tree for dropping the coroutine while it's suspended.
        let drops = &mut self.scopes.coroutine_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        drops.build_mir::<CoroutineDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::CoroutineDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // coroutine.
        //
        // This is a separate tree from the standard unwind paths, to prevent
        // drop elaboration from creating drop flags that would have to be
        // captured by the coroutine. I'm not sure how important this
        // optimization is, but it is here.
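        //
        // Each `Value` drop in the coroutine-drop tree can itself panic, so we
        // register its block as an entry point; the unwind tree built below then
        // provides the cleanup path for the drops that remain (`drop_node.next`).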
        for (drop_idx, drop_node) in drops.drops.iter_enumerated() {
            if let DropKind::Value = drop_node.data.kind {
                debug_assert!(drop_node.next < drops.drops.next_index());
                drops.entry_points.push((drop_node.next, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }

    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = *resume_block;
        drops.build_mir::<Unwind>(cfg, &mut blocks);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}

// DropTreeBuilder implementations.

struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        // There should be an existing terminator with real source info and a
        // dummy TerminatorKind. Replace it with a proper goto.
        // (The dummy is added by `break_scope` and `break_for_else`.)
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::UnwindResume = term.kind {
            term.kind = TerminatorKind::Goto { target: to };
        } else {
            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
        }
    }
}

struct CoroutineDrop;

impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter coroutine drop tree from {:?}",
                term.kind
            )
        }
    }
}

struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. } => {
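                // The `Drop` may already carry a cleanup block (e.g. the
                // `assign_unwind` block created by `build_drop_and_replace`);
                // in that case, terminate that block with a goto into the
                // unwind drops rather than overwriting the edge.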
                if let UnwindAction::Cleanup(unwind) = *unwind {
                    let source_info = term.source_info;
                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
                } else {
                    *unwind = UnwindAction::Cleanup(to);
                }
            }
            TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { unwind, .. }
            | TerminatorKind::Assert { unwind, .. }
            | TerminatorKind::InlineAsm { unwind, .. } => {
                *unwind = UnwindAction::Cleanup(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::UnwindResume
            | TerminatorKind::UnwindTerminate(_)
            | TerminatorKind::Return
            | TerminatorKind::TailCall { .. }
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::CoroutineDrop
            | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}