Struct rustc_type_ir::search_graph::SearchGraph
pub struct SearchGraph<D: Delegate<Cx = X>, X: Cx = <D as Delegate>::Cx> {
mode: SolverMode,
stack: IndexVec<StackDepth, StackEntry<X>>,
provisional_cache: HashMap<X::Input, Vec<ProvisionalCacheEntry<X>>>,
_marker: PhantomData<D>,
}
Fields
mode: SolverMode
§stack: IndexVec<StackDepth, StackEntry<X>>
The stack of goals currently being computed.
An element is deeper in the stack if its index is lower.
provisional_cache: HashMap<X::Input, Vec<ProvisionalCacheEntry<X>>>
The provisional cache contains entries for already computed goals which still depend on goals higher up in the stack. We don't move them to the global cache but instead track them locally. A provisional cache entry is only valid until the result of one of its cycle heads changes.
_marker: PhantomData<D>
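The invariant described for the provisional cache, that an entry is only valid while the results of its cycle heads are unchanged, can be sketched with a simplified, self-contained model. None of the types or names below are the actual rustc ones; in particular the real entries track several cycle heads and the path kind towards each of them.

```rust
use std::collections::HashMap;

// Illustrative stand-ins for X::Input and a cycle-head index.
type Input = &'static str;
type Head = usize;

// Each entry remembers the cycle head it depends on and its cached
// provisional result. Once that head's result changes, the entry is
// no longer valid and must be dropped.
struct ProvisionalCache {
    entries: HashMap<Input, (Head, bool)>,
}

impl ProvisionalCache {
    // Invalidate every entry that depends on the given cycle head.
    fn invalidate_entries_depending_on(&mut self, head: Head) {
        self.entries.retain(|_, &mut (h, _)| h != head);
    }
}
```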
Implementations
impl<D: Delegate<Cx = X>, X: Cx> SearchGraph<D>
pub fn new(mode: SolverMode) -> SearchGraph<D>
pub fn solver_mode(&self) -> SolverMode
fn update_parent_goal(
    cx: X,
    stack: &mut IndexVec<StackDepth, StackEntry<X>>,
    reached_depth: StackDepth,
    heads: &CycleHeads,
    encountered_overflow: bool,
    nested_goals: &NestedGoals<X>,
)
Lazily update the stack entry for the parent goal. This behavior is shared between actually evaluating goals and using existing global cache entries to make sure they have the same impact on the remaining evaluation.
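A rough sketch of what such a lazy parent update amounts to, under the simplifying assumption (not taken from the source) that the parent only tracks the maximum reached depth and whether any nested goal overflowed:

```rust
// Illustrative stand-in for the parent's StackEntry bookkeeping.
struct ParentEntry {
    reached_depth: usize,
    encountered_overflow: bool,
}

// Whether the nested goal was actually evaluated or answered from the
// global cache, the parent is updated the same way, so both paths have
// the same impact on the remaining evaluation.
fn update_parent_goal(parent: &mut ParentEntry, reached_depth: usize, overflow: bool) {
    parent.reached_depth = parent.reached_depth.max(reached_depth);
    parent.encountered_overflow |= overflow;
}
```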
pub fn is_empty(&self) -> bool
pub fn debug_current_depth(&self) -> usize
The number of goals currently in the search graph. This should only be used for debugging purposes.
fn step_kind(cx: X, input: X::Input) -> PathKind
fn stack_path_kind(
    cx: X,
    stack: &IndexVec<StackDepth, StackEntry<X>>,
    head: StackDepth,
) -> PathKind
Whether the path from head to the current stack entry is inductive or coinductive.
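As a simplified model of this classification (the precise rules live in rustc's cycle handling; this only captures the common intuition): a path is coinductive only if every step along it is coinductive, and a single inductive step makes the whole path inductive.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum PathKind {
    Inductive,
    Coinductive,
}

// Classify the path from the head to the current stack entry by
// folding over the kind of each step along it.
fn stack_path_kind(steps: &[PathKind]) -> PathKind {
    if steps.iter().all(|&s| s == PathKind::Coinductive) {
        PathKind::Coinductive
    } else {
        PathKind::Inductive
    }
}
```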
pub fn with_new_goal(
    &mut self,
    cx: X,
    input: X::Input,
    inspect: &mut D::ProofTreeBuilder,
    evaluate_goal: impl FnMut(&mut Self, &mut D::ProofTreeBuilder) -> X::Result,
) -> X::Result
Probably the most involved method of the whole solver.
Given some goal which is proven via the evaluate_goal closure, this handles caching, overflow, and coinductive cycles.
fn handle_overflow( &mut self, cx: X, input: X::Input, inspect: &mut D::ProofTreeBuilder, ) -> X::Result
fn clear_dependent_provisional_results(&mut self)
When reevaluating a goal with a changed provisional result, all provisional cache entries which depend on this goal get invalidated.
fn rebase_provisional_cache_entries(
    &mut self,
    cx: X,
    stack_entry: &StackEntry<X>,
    mutate_result: impl FnMut(X::Input, X::Result) -> X::Result,
)
A necessary optimization to handle complex solver cycles. A provisional cache entry relies on a set of cycle heads and the path towards these heads. When popping a cycle head from the stack after we’ve finished computing it, we can’t be sure that the provisional cache entry is still applicable. We need to keep the cache entries to prevent hangs.
What we therefore do is check whether the cycle kind of all cycles the goal of a provisional cache entry is involved in would stay the same when computing the goal without its cycle head on the stack. For more details, see the relevant rustc-dev-guide chapter.
This can be thought of as rotating the sub-tree of this provisional result and changing its entry point while making sure that all paths through this sub-tree stay the same.
In case the popped cycle head failed to reach a fixpoint, anything which depends on its provisional result is invalid. Actually discarding provisional cache entries in this case would cause hangs, so we instead change the result of dependent provisional cache entries to also be ambiguous. This causes some undesirable ambiguity for nested goals whose result doesn't actually depend on this cycle head, but that's acceptable to me.
fn lookup_provisional_cache( &mut self, cx: X, input: X::Input, ) -> Option<X::Result>
fn candidate_is_applicable(
    cx: X,
    stack: &IndexVec<StackDepth, StackEntry<X>>,
    provisional_cache: &HashMap<X::Input, Vec<ProvisionalCacheEntry<X>>>,
    nested_goals: &NestedGoals<X>,
) -> bool
Even if there is a global cache entry for a given goal, we need to make sure evaluating this entry would not have ended up depending on either a goal already on the stack or a provisional cache entry.
fn lookup_global_cache_untracked(
    &self,
    cx: X,
    input: X::Input,
    available_depth: AvailableDepth,
) -> Option<X::Result>
Used when fuzzing the global cache. Accesses the global cache without updating the state of the search graph.
fn lookup_global_cache(
    &mut self,
    cx: X,
    input: X::Input,
    available_depth: AvailableDepth,
) -> Option<X::Result>
Try to fetch a previously computed result from the global cache, making sure to only do so if it would match the result of reevaluating this goal.
fn check_cycle_on_stack(&mut self, cx: X, input: X::Input) -> Option<X::Result>
fn reached_fixpoint(
    &mut self,
    cx: X,
    stack_entry: &StackEntry<X>,
    usage_kind: UsageKind,
    result: X::Result,
) -> bool
Whether we’ve reached a fixpoint when evaluating a cycle head.
fn evaluate_goal_in_task(
    &mut self,
    cx: X,
    input: X::Input,
    inspect: &mut D::ProofTreeBuilder,
    evaluate_goal: impl FnMut(&mut Self, &mut D::ProofTreeBuilder) -> X::Result,
) -> (StackEntry<X>, X::Result)
When we encounter a coinductive cycle, we have to fetch the result of that cycle while we are still computing it. Because of this we continuously recompute the cycle until the result of the previous iteration is equal to the final result, at which point we are done.
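The iteration described here can be sketched as a generic fixpoint loop. The names and the explicit iteration limit below are illustrative only; the real solver also rebases provisional cache entries between iterations.

```rust
// Repeatedly reevaluate the cycle head, feeding the previous result
// back in as the provisional result, until the output stops changing.
fn fixpoint_iterate<R: PartialEq + Copy>(
    initial_provisional: R,
    max_iterations: usize,
    mut evaluate: impl FnMut(R) -> R,
) -> Option<R> {
    let mut provisional = initial_provisional;
    for _ in 0..max_iterations {
        let result = evaluate(provisional);
        if result == provisional {
            // The previous iteration equals the final result: done.
            return Some(result);
        }
        provisional = result;
    }
    // Did not converge within the limit.
    None
}
```

For example, `fixpoint_iterate(64u32, 10, |x| (x / 2).max(1))` converges to `1`.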
fn insert_global_cache(
    &mut self,
    cx: X,
    input: X::Input,
    final_entry: StackEntry<X>,
    result: X::Result,
    dep_node: X::DepNodeIndex,
)
When encountering a cycle, both inductive and coinductive, we only move the root into the global cache. We also store all other cycle participants involved.
We must not use the global cache entry of a root goal if a cycle participant is on the stack. This is necessary to prevent unstable results. See the comment of StackEntry::nested_goals for more details.
Auto Trait Implementations
impl<D, X> Freeze for SearchGraph<D, X>
impl<D, X> RefUnwindSafe for SearchGraph<D, X>
impl<D, X> Send for SearchGraph<D, X>
impl<D, X> Sync for SearchGraph<D, X>
impl<D, X> Unpin for SearchGraph<D, X>
impl<D, X> UnwindSafe for SearchGraph<D, X>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T, R> CollectAndApply<T, R> for T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<I, T, U> Upcast<I, U> for T where U: UpcastFrom<I, T>
impl<I, T> UpcastFrom<I, T> for T
fn upcast_from(from: T, _tcx: I) -> T
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
fn with_current_subscriber(self) -> WithDispatch<Self>
impl<'a, T> Captures<'a> for T where T: ?Sized
Layout
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference's "Type Layout" chapter for details on type layout guarantees.
Size: 64 bytes