
rustc_data_structures/profiling.rs

//! # Rust Compiler Self-Profiling
//!
//! This module implements the basic framework for the compiler's self-
//! profiling support. It provides the `SelfProfiler` type which enables
//! recording "events". An event is something that starts and ends at a given
//! point in time and has an ID and a kind attached to it. This allows for
//! tracing the compiler's activity.
//!
//! Internally this module uses the custom-tailored [measureme][mm] crate for
//! efficiently recording events to disk in a compact format that can be
//! post-processed and analyzed by the suite of tools in the `measureme`
//! project. The highest priority for the tracing framework is to incur as
//! little overhead as possible.
//!
//!
//! ## Event Overview
//!
//! Events have a few properties:
//!
//! - The `event_kind` designates the broad category of an event (e.g. does it
//!   correspond to the execution of a query provider or to loading something
//!   from the incr. comp. on-disk cache, etc.).
//! - The `event_id` designates the query invocation or function call it
//!   corresponds to, possibly including the query key or function arguments.
//! - Each event stores the ID of the thread it was recorded on.
//! - The timestamp stores the beginning and end of the event, or the single
//!   point in time it occurred at for "instant" events.
//!
//!
//! ## Event Filtering
//!
//! Event generation can be filtered by event kind. Recording all possible
//! events generates a lot of data, much of which is not needed for most kinds
//! of analysis. So, in order to keep overhead as low as possible for a given
//! use case, the `SelfProfiler` will only record the kinds of events that
//! pass the filter specified as a command line argument to the compiler.
//!
//!
//! ## `event_id` Assignment
//!
//! As far as `measureme` is concerned, `event_id`s are just strings. However,
//! it would incur too much overhead to generate and persist each `event_id`
//! string at the point where the event is recorded. In order to make this more
//! efficient `measureme` has two features:
//!
//! - Strings can share their content, so that re-occurring parts don't have to
//!   be copied over and over again. One allocates a string in `measureme` and
//!   gets back a `StringId`. This `StringId` is then used to refer to that
//!   string. `measureme` strings are actually DAGs of string components so that
//!   arbitrary sharing of substrings can be done efficiently. This is useful
//!   because `event_id`s contain lots of redundant text like query names or
//!   def-path components.
//!
//! - `StringId`s can be "virtual" which means that the client picks a numeric
//!   ID according to some application-specific scheme and can later make that
//!   ID be mapped to an actual string. This is used to cheaply generate
//!   `event_id`s while the events actually occur, causing little timing
//!   distortion, and then later map those `StringId`s, in bulk, to actual
//!   `event_id` strings. This way the largest part of the tracing overhead is
//!   localized to one contiguous chunk of time.
//!
//! How are these `event_id`s generated in the compiler? For things that occur
//! infrequently (e.g. "generic activities"), we just allocate the string the
//! first time it is used and then keep the `StringId` in a hash table. This
//! is implemented in `SelfProfiler::get_or_alloc_cached_string()`.
//!
//! For queries it gets more interesting: First we need a unique numeric ID for
//! each query invocation (the `QueryInvocationId`). This ID is used as the
//! virtual `StringId` we use as `event_id` for a given event. This ID has to
//! be available both when the query is executed and later, together with the
//! query key, when we allocate the actual `event_id` strings in bulk.
//!
//! We could make the compiler generate and keep track of such an ID for each
//! query invocation but luckily we already have something that fits all the
//! requirements: the query's `DepNodeIndex`. So we use the numeric value
//! of the `DepNodeIndex` as `event_id` when recording the event and then,
//! just before the query context is dropped, we walk the entire query cache
//! (which stores the `DepNodeIndex` along with the query key for each
//! invocation) and allocate the corresponding strings together with a mapping
//! for `DepNodeIndex as StringId`.
//!
//! [mm]: https://github.com/rust-lang/measureme/

use std::borrow::Borrow;
use std::collections::hash_map::Entry;
use std::error::Error;
use std::fmt::Display;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{Duration, Instant};
use std::{fs, hint, process};

pub use measureme::EventId;
use measureme::{EventIdBuilder, Profiler, SerializableString, StringId};
use parking_lot::RwLock;
use smallvec::SmallVec;
use tracing::warn;

use crate::fx::FxHashMap;
use crate::outline;
use crate::sync::AtomicU64;
bitflags::bitflags! {
    #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
    struct EventFilter: u16 {
        const GENERIC_ACTIVITIES = 1 << 0;
        const QUERY_PROVIDERS = 1 << 1;
        /// Store detailed instant events, including timestamp and thread ID,
        /// per each query cache hit. Note that this is quite expensive.
        const QUERY_CACHE_HITS = 1 << 2;
        const QUERY_BLOCKED = 1 << 3;
        const INCR_CACHE_LOADS = 1 << 4;

        const QUERY_KEYS = 1 << 5;
        const FUNCTION_ARGS = 1 << 6;
        const LLVM = 1 << 7;
        const INCR_RESULT_HASHING = 1 << 8;
        const ARTIFACT_SIZES = 1 << 9;
        /// Store aggregated counts of cache hits per query invocation.
        const QUERY_CACHE_HIT_COUNTS = 1 << 10;

        const DEFAULT = Self::GENERIC_ACTIVITIES.bits() |
            Self::QUERY_PROVIDERS.bits() |
            Self::QUERY_BLOCKED.bits() |
            Self::INCR_CACHE_LOADS.bits() |
            Self::INCR_RESULT_HASHING.bits() |
            Self::ARTIFACT_SIZES.bits() |
            Self::QUERY_CACHE_HIT_COUNTS.bits();

        const ARGS = Self::QUERY_KEYS.bits() | Self::FUNCTION_ARGS.bits();

        const QUERY_CACHE_HIT_COMBINED = Self::QUERY_CACHE_HITS.bits() |
            Self::QUERY_CACHE_HIT_COUNTS.bits();
    }
}
        impl ::bitflags::__private::core::iter::FromIterator<InternalBitFlags>
            for InternalBitFlags {
            /// The bitwise or (`|`) of the bits in each flags value.
            fn from_iter<T: ::bitflags::__private::core::iter::IntoIterator<Item
                = Self>>(iterator: T) -> Self {
                use ::bitflags::__private::core::iter::Extend;
                let mut result = Self::empty();
                result.extend(iterator);
                result
            }
        }
        impl InternalBitFlags {
            /// Yield a set of contained flags values.
            ///
            /// Each yielded flags value will correspond to a defined named flag. Any unknown bits
            /// will be yielded together as a final flags value.
            #[inline]
            pub const fn iter(&self) -> ::bitflags::iter::Iter<EventFilter> {
                ::bitflags::iter::Iter::__private_const_new(<EventFilter as
                        ::bitflags::Flags>::FLAGS,
                    EventFilter::from_bits_retain(self.bits()),
                    EventFilter::from_bits_retain(self.bits()))
            }
            /// Yield a set of contained named flags values.
            ///
            /// This method is like [`iter`](#method.iter), except only yields bits in contained named flags.
            /// Any unknown bits, or bits not corresponding to a contained flag will not be yielded.
            #[inline]
            pub const fn iter_names(&self)
                -> ::bitflags::iter::IterNames<EventFilter> {
                ::bitflags::iter::IterNames::__private_const_new(<EventFilter
                        as ::bitflags::Flags>::FLAGS,
                    EventFilter::from_bits_retain(self.bits()),
                    EventFilter::from_bits_retain(self.bits()))
            }
        }
        impl ::bitflags::__private::core::iter::IntoIterator for
            InternalBitFlags {
            type Item = EventFilter;
            type IntoIter = ::bitflags::iter::Iter<EventFilter>;
            fn into_iter(self) -> Self::IntoIter { self.iter() }
        }
        impl InternalBitFlags {
            /// Returns a mutable reference to the raw value of the flags currently stored.
            #[inline]
            pub fn bits_mut(&mut self) -> &mut u16 { &mut self.0 }
        }
        #[allow(dead_code, deprecated, unused_attributes)]
        impl EventFilter {
            /// Get a flags value with all bits unset.
            #[inline]
            pub const fn empty() -> Self { Self(InternalBitFlags::empty()) }
            /// Get a flags value with all known bits set.
            #[inline]
            pub const fn all() -> Self { Self(InternalBitFlags::all()) }
            /// Get the underlying bits value.
            ///
            /// The returned value is exactly the bits set in this flags value.
            #[inline]
            pub const fn bits(&self) -> u16 { self.0.bits() }
            /// Convert from a bits value.
            ///
            /// This method will return `None` if any unknown bits are set.
            #[inline]
            pub const fn from_bits(bits: u16)
                -> ::bitflags::__private::core::option::Option<Self> {
                match InternalBitFlags::from_bits(bits) {
                    ::bitflags::__private::core::option::Option::Some(bits) =>
                        ::bitflags::__private::core::option::Option::Some(Self(bits)),
                    ::bitflags::__private::core::option::Option::None =>
                        ::bitflags::__private::core::option::Option::None,
                }
            }
            /// Convert from a bits value, unsetting any unknown bits.
            #[inline]
            pub const fn from_bits_truncate(bits: u16) -> Self {
                Self(InternalBitFlags::from_bits_truncate(bits))
            }
            /// Convert from a bits value exactly.
            #[inline]
            pub const fn from_bits_retain(bits: u16) -> Self {
                Self(InternalBitFlags::from_bits_retain(bits))
            }
            /// Get a flags value with the bits of a flag with the given name set.
            ///
            /// This method will return `None` if `name` is empty or doesn't
            /// correspond to any named flag.
            #[inline]
            pub fn from_name(name: &str)
                -> ::bitflags::__private::core::option::Option<Self> {
                match InternalBitFlags::from_name(name) {
                    ::bitflags::__private::core::option::Option::Some(bits) =>
                        ::bitflags::__private::core::option::Option::Some(Self(bits)),
                    ::bitflags::__private::core::option::Option::None =>
                        ::bitflags::__private::core::option::Option::None,
                }
            }
            /// Whether all bits in this flags value are unset.
            #[inline]
            pub const fn is_empty(&self) -> bool { self.0.is_empty() }
            /// Whether all known bits in this flags value are set.
            #[inline]
            pub const fn is_all(&self) -> bool { self.0.is_all() }
            /// Whether any set bits in a source flags value are also set in a target flags value.
            #[inline]
            pub const fn intersects(&self, other: Self) -> bool {
                self.0.intersects(other.0)
            }
            /// Whether all set bits in a source flags value are also set in a target flags value.
            #[inline]
            pub const fn contains(&self, other: Self) -> bool {
                self.0.contains(other.0)
            }
            /// The bitwise or (`|`) of the bits in two flags values.
            #[inline]
            pub fn insert(&mut self, other: Self) { self.0.insert(other.0) }
            /// The intersection of a source flags value with the complement of a target flags
            /// value (`&!`).
            ///
            /// This method is not equivalent to `self & !other` when `other` has unknown bits set.
            /// `remove` won't truncate `other`, but the `!` operator will.
            #[inline]
            pub fn remove(&mut self, other: Self) { self.0.remove(other.0) }
            /// The bitwise exclusive-or (`^`) of the bits in two flags values.
            #[inline]
            pub fn toggle(&mut self, other: Self) { self.0.toggle(other.0) }
            /// Call `insert` when `value` is `true` or `remove` when `value` is `false`.
            #[inline]
            pub fn set(&mut self, other: Self, value: bool) {
                self.0.set(other.0, value)
            }
            /// The bitwise and (`&`) of the bits in two flags values.
            #[inline]
            #[must_use]
            pub const fn intersection(self, other: Self) -> Self {
                Self(self.0.intersection(other.0))
            }
            /// The bitwise or (`|`) of the bits in two flags values.
            #[inline]
            #[must_use]
            pub const fn union(self, other: Self) -> Self {
                Self(self.0.union(other.0))
            }
            /// The intersection of a source flags value with the complement of a target flags
            /// value (`&!`).
            ///
            /// This method is not equivalent to `self & !other` when `other` has unknown bits set.
            /// `difference` won't truncate `other`, but the `!` operator will.
            #[inline]
            #[must_use]
            pub const fn difference(self, other: Self) -> Self {
                Self(self.0.difference(other.0))
            }
            /// The bitwise exclusive-or (`^`) of the bits in two flags values.
            #[inline]
            #[must_use]
            pub const fn symmetric_difference(self, other: Self) -> Self {
                Self(self.0.symmetric_difference(other.0))
            }
            /// The bitwise negation (`!`) of the bits in a flags value, truncating the result.
            #[inline]
            #[must_use]
            pub const fn complement(self) -> Self {
                Self(self.0.complement())
            }
        }
        impl ::bitflags::__private::core::fmt::Binary for EventFilter {
            fn fmt(&self, f: &mut ::bitflags::__private::core::fmt::Formatter)
                -> ::bitflags::__private::core::fmt::Result {
                let inner = self.0;
                ::bitflags::__private::core::fmt::Binary::fmt(&inner, f)
            }
        }
        impl ::bitflags::__private::core::fmt::Octal for EventFilter {
            fn fmt(&self, f: &mut ::bitflags::__private::core::fmt::Formatter)
                -> ::bitflags::__private::core::fmt::Result {
                let inner = self.0;
                ::bitflags::__private::core::fmt::Octal::fmt(&inner, f)
            }
        }
        impl ::bitflags::__private::core::fmt::LowerHex for EventFilter {
            fn fmt(&self, f: &mut ::bitflags::__private::core::fmt::Formatter)
                -> ::bitflags::__private::core::fmt::Result {
                let inner = self.0;
                ::bitflags::__private::core::fmt::LowerHex::fmt(&inner, f)
            }
        }
        impl ::bitflags::__private::core::fmt::UpperHex for EventFilter {
            fn fmt(&self, f: &mut ::bitflags::__private::core::fmt::Formatter)
                -> ::bitflags::__private::core::fmt::Result {
                let inner = self.0;
                ::bitflags::__private::core::fmt::UpperHex::fmt(&inner, f)
            }
        }
        impl ::bitflags::__private::core::ops::BitOr for EventFilter {
            type Output = Self;
            /// The bitwise or (`|`) of the bits in two flags values.
            #[inline]
            fn bitor(self, other: EventFilter) -> Self { self.union(other) }
        }
        impl ::bitflags::__private::core::ops::BitOrAssign for EventFilter {
            /// The bitwise or (`|`) of the bits in two flags values.
            #[inline]
            fn bitor_assign(&mut self, other: Self) { self.insert(other); }
        }
        impl ::bitflags::__private::core::ops::BitXor for EventFilter {
            type Output = Self;
            /// The bitwise exclusive-or (`^`) of the bits in two flags values.
            #[inline]
            fn bitxor(self, other: Self) -> Self {
                self.symmetric_difference(other)
            }
        }
        impl ::bitflags::__private::core::ops::BitXorAssign for EventFilter {
            /// The bitwise exclusive-or (`^`) of the bits in two flags values.
            #[inline]
            fn bitxor_assign(&mut self, other: Self) { self.toggle(other); }
        }
        impl ::bitflags::__private::core::ops::BitAnd for EventFilter {
            type Output = Self;
            /// The bitwise and (`&`) of the bits in two flags values.
            #[inline]
            fn bitand(self, other: Self) -> Self { self.intersection(other) }
        }
        impl ::bitflags::__private::core::ops::BitAndAssign for EventFilter {
            /// The bitwise and (`&`) of the bits in two flags values.
            #[inline]
            fn bitand_assign(&mut self, other: Self) {
                *self =
                    Self::from_bits_retain(self.bits()).intersection(other);
            }
        }
        impl ::bitflags::__private::core::ops::Sub for EventFilter {
            type Output = Self;
            /// The intersection of a source flags value with the complement of a target flags value (`&!`).
            ///
            /// This method is not equivalent to `self & !other` when `other` has unknown bits set.
            /// `difference` won't truncate `other`, but the `!` operator will.
            #[inline]
            fn sub(self, other: Self) -> Self { self.difference(other) }
        }
        impl ::bitflags::__private::core::ops::SubAssign for EventFilter {
            /// The intersection of a source flags value with the complement of a target flags value (`&!`).
            ///
            /// This method is not equivalent to `self & !other` when `other` has unknown bits set.
            /// `difference` won't truncate `other`, but the `!` operator will.
            #[inline]
            fn sub_assign(&mut self, other: Self) { self.remove(other); }
        }
        impl ::bitflags::__private::core::ops::Not for EventFilter {
            type Output = Self;
            /// The bitwise negation (`!`) of the bits in a flags value, truncating the result.
            #[inline]
            fn not(self) -> Self { self.complement() }
        }
        impl ::bitflags::__private::core::iter::Extend<EventFilter> for
            EventFilter {
            /// The bitwise or (`|`) of the bits in each flags value.
            fn extend<T: ::bitflags::__private::core::iter::IntoIterator<Item
                = Self>>(&mut self, iterator: T) {
                for item in iterator { self.insert(item) }
            }
        }
        impl ::bitflags::__private::core::iter::FromIterator<EventFilter> for
            EventFilter {
            /// The bitwise or (`|`) of the bits in each flags value.
            fn from_iter<T: ::bitflags::__private::core::iter::IntoIterator<Item
                = Self>>(iterator: T) -> Self {
                use ::bitflags::__private::core::iter::Extend;
                let mut result = Self::empty();
                result.extend(iterator);
                result
            }
        }
        impl EventFilter {
            /// Yield a set of contained flags values.
            ///
            /// Each yielded flags value will correspond to a defined named flag. Any unknown bits
            /// will be yielded together as a final flags value.
            #[inline]
            pub const fn iter(&self) -> ::bitflags::iter::Iter<EventFilter> {
                ::bitflags::iter::Iter::__private_const_new(<EventFilter as
                        ::bitflags::Flags>::FLAGS,
                    EventFilter::from_bits_retain(self.bits()),
                    EventFilter::from_bits_retain(self.bits()))
            }
            /// Yield a set of contained named flags values.
            ///
            /// This method is like [`iter`](#method.iter), except only yields bits in contained named flags.
            /// Any unknown bits, or bits not corresponding to a contained flag will not be yielded.
            #[inline]
            pub const fn iter_names(&self)
                -> ::bitflags::iter::IterNames<EventFilter> {
                ::bitflags::iter::IterNames::__private_const_new(<EventFilter
                        as ::bitflags::Flags>::FLAGS,
                    EventFilter::from_bits_retain(self.bits()),
                    EventFilter::from_bits_retain(self.bits()))
            }
        }
        impl ::bitflags::__private::core::iter::IntoIterator for EventFilter {
            type Item = EventFilter;
            type IntoIter = ::bitflags::iter::Iter<EventFilter>;
            fn into_iter(self) -> Self::IntoIter { self.iter() }
        }
    };Clone, #[automatically_derived]
impl ::core::marker::Copy for EventFilter { }Copy)]
    struct EventFilter: u16 {
        const GENERIC_ACTIVITIES  = 1 << 0;
        const QUERY_PROVIDERS     = 1 << 1;
        /// Store detailed instant events, including timestamp and thread ID,
        /// for each query cache hit. Note that this is quite expensive.
        const QUERY_CACHE_HITS    = 1 << 2;
        const QUERY_BLOCKED       = 1 << 3;
        const INCR_CACHE_LOADS    = 1 << 4;

        const QUERY_KEYS          = 1 << 5;
        const FUNCTION_ARGS       = 1 << 6;
        const LLVM                = 1 << 7;
        const INCR_RESULT_HASHING = 1 << 8;
        const ARTIFACT_SIZES      = 1 << 9;
        /// Store aggregated counts of cache hits per query invocation.
        const QUERY_CACHE_HIT_COUNTS  = 1 << 10;

        const DEFAULT = Self::GENERIC_ACTIVITIES.bits() |
                        Self::QUERY_PROVIDERS.bits() |
                        Self::QUERY_BLOCKED.bits() |
                        Self::INCR_CACHE_LOADS.bits() |
                        Self::INCR_RESULT_HASHING.bits() |
                        Self::ARTIFACT_SIZES.bits() |
                        Self::QUERY_CACHE_HIT_COUNTS.bits();

        const ARGS = Self::QUERY_KEYS.bits() | Self::FUNCTION_ARGS.bits();
        const QUERY_CACHE_HIT_COMBINED = Self::QUERY_CACHE_HITS.bits() | Self::QUERY_CACHE_HIT_COUNTS.bits();
    }
}

// keep this in sync with the `-Z self-profile-events` help message in rustc_session/options.rs
const EVENT_FILTERS_BY_NAME: &[(&str, EventFilter)] = &[
    ("none", EventFilter::empty()),
    ("all", EventFilter::all()),
    ("default", EventFilter::DEFAULT),
    ("generic-activity", EventFilter::GENERIC_ACTIVITIES),
    ("query-provider", EventFilter::QUERY_PROVIDERS),
    ("query-cache-hit", EventFilter::QUERY_CACHE_HITS),
    ("query-cache-hit-count", EventFilter::QUERY_CACHE_HIT_COUNTS),
    ("query-blocked", EventFilter::QUERY_BLOCKED),
    ("incr-cache-load", EventFilter::INCR_CACHE_LOADS),
    ("query-keys", EventFilter::QUERY_KEYS),
    ("function-args", EventFilter::FUNCTION_ARGS),
    ("args", EventFilter::ARGS),
    ("llvm", EventFilter::LLVM),
    ("incr-result-hashing", EventFilter::INCR_RESULT_HASHING),
    ("artifact-sizes", EventFilter::ARTIFACT_SIZES),
];
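A name table like `EVENT_FILTERS_BY_NAME` lends itself to folding a comma-separated event spec into a single mask. The sketch below is a hypothetical, self-contained illustration of that idea — the `u16` constants and `parse_event_filters` helper stand in for the real `EventFilter` type and rustc's actual option parsing, which live elsewhere.

```rust
// Stand-ins for a couple of the real `EventFilter` bits.
const GENERIC_ACTIVITIES: u16 = 1 << 0;
const QUERY_PROVIDERS: u16 = 1 << 1;

// Stand-in for `EVENT_FILTERS_BY_NAME`.
const FILTERS_BY_NAME: &[(&str, u16)] = &[
    ("none", 0),
    ("generic-activity", GENERIC_ACTIVITIES),
    ("query-provider", QUERY_PROVIDERS),
];

/// Fold a comma-separated list of event names into one mask,
/// rejecting any name that isn't in the table.
fn parse_event_filters(spec: &str) -> Result<u16, String> {
    spec.split(',').try_fold(0u16, |mask, name| {
        FILTERS_BY_NAME
            .iter()
            .find(|(n, _)| *n == name)
            .map(|(_, bits)| mask | bits)
            .ok_or_else(|| format!("unknown event filter: {}", name))
    })
}

fn main() {
    let mask = parse_event_filters("generic-activity,query-provider").unwrap();
    assert_eq!(mask, GENERIC_ACTIVITIES | QUERY_PROVIDERS);
    assert!(parse_event_filters("bogus").is_err());
}
```

Unknown names fail loudly rather than being silently ignored, mirroring how a typo in `-Z self-profile-events` should surface as an error instead of a mask that quietly records nothing.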

/// Something that uniquely identifies a query invocation.
pub struct QueryInvocationId(pub u32);

/// Which format to use for `-Z time-passes`
#[derive(Clone, Copy, PartialEq, Hash, Debug)]
pub enum TimePassesFormat {
    /// Emit human readable text
    Text,
    /// Emit structured JSON
    Json,
}

/// A reference to the SelfProfiler. It can be cloned and sent across thread
/// boundaries at will.
#[derive(Clone)]
pub struct SelfProfilerRef {
    // This field is `None` if self-profiling is disabled for the current
    // compilation session.
    profiler: Option<Arc<SelfProfiler>>,

    // We store the filter mask directly in the reference because that doesn't
    // cost anything, and it lets the filter check double as the check for
    // whether the profiler is actually enabled.
    event_filter_mask: EventFilter,

    // Print verbose generic activities to stderr.
    print_verbose_generic_activities: Option<TimePassesFormat>,
}

impl SelfProfilerRef {
    pub fn new(
        profiler: Option<Arc<SelfProfiler>>,
        print_verbose_generic_activities: Option<TimePassesFormat>,
    ) -> SelfProfilerRef {
        // If there is no SelfProfiler then the filter mask is set to
        // `EventFilter::empty()`, ensuring that nothing ever tries to
        // actually access it.
        let event_filter_mask =
            profiler.as_ref().map_or(EventFilter::empty(), |p| p.event_filter_mask);

        SelfProfilerRef { profiler, event_filter_mask, print_verbose_generic_activities }
    }

    /// This shim makes sure that calls only get executed if the filter mask
    /// lets them pass. It also contains some trickery to make sure that
    /// code is optimized for non-profiling compilation sessions, i.e. anything
    /// past the filter check is never inlined so it doesn't clutter the fast
    /// path.
    #[inline(always)]
    fn exec<F>(&self, event_filter: EventFilter, f: F) -> TimingGuard<'_>
    where
        F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
    {
        #[inline(never)]
        #[cold]
        fn cold_call<F>(profiler_ref: &SelfProfilerRef, f: F) -> TimingGuard<'_>
        where
            F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
        {
            let profiler = profiler_ref.profiler.as_ref().unwrap();
            f(profiler)
        }

        if self.event_filter_mask.contains(event_filter) {
            cold_call(self, f)
        } else {
            TimingGuard::none()
        }
    }

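The fast-path/cold-path split in `exec` is a general outlining pattern: the filter check stays inlinable at every call site, while everything past it lives in an `#[inline(never)] #[cold]` function so it never bloats the hot path. A minimal, self-contained sketch of that shape (with a `u16` mask and a `u32` result standing in for `EventFilter` and `TimingGuard`, both assumptions for illustration):

```rust
// Hypothetical stand-in for `SelfProfilerRef`, keeping only the mask.
struct Profiler {
    event_filter_mask: u16,
}

impl Profiler {
    // The check itself is always inlined; the work behind it never is.
    #[inline(always)]
    fn exec(&self, event_filter: u16, f: impl FnOnce() -> u32) -> u32 {
        // Outlined slow path: only reached when the filter bit is set.
        #[inline(never)]
        #[cold]
        fn cold_call(f: impl FnOnce() -> u32) -> u32 {
            f()
        }

        if self.event_filter_mask & event_filter != 0 {
            cold_call(f)
        } else {
            0 // analogous to `TimingGuard::none()`
        }
    }
}

fn main() {
    let enabled = Profiler { event_filter_mask: 0b01 };
    let disabled = Profiler { event_filter_mask: 0b00 };
    assert_eq!(enabled.exec(0b01, || 42), 42);
    assert_eq!(disabled.exec(0b01, || 42), 0);
}
```

The `#[cold]` hint tells the optimizer to lay the slow path out of line and predict the branch as not taken, so a disabled profiler costs little more than one masked compare per call site.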
    /// Start profiling a verbose generic activity. Profiling continues until the
    /// VerboseTimingGuard returned from this call is dropped. In addition to recording
    /// a measureme event, "verbose" generic activities also print a timing entry to
    /// stderr if the compiler is invoked with -Ztime-passes.
    pub fn verbose_generic_activity(&self, event_label: &'static str) -> VerboseTimingGuard<'_> {
        let message_and_format =
            self.print_verbose_generic_activities.map(|format| (event_label.to_owned(), format));

        VerboseTimingGuard::start(message_and_format, self.generic_activity(event_label))
    }

    /// Like `verbose_generic_activity`, but with an extra arg.
    pub fn verbose_generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> VerboseTimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        let message_and_format = self
            .print_verbose_generic_activities
            .map(|format| (format!("{}({})", event_label, event_arg.borrow()), format));

        VerboseTimingGuard::start(
            message_and_format,
            self.generic_activity_with_arg(event_label, event_arg),
        )
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity(&self, event_label: &'static str) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = EventId::from_label(event_label);
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling with some event filter for a given event. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_event_id(&self, event_id: EventId) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> TimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_arg = profiler.get_or_alloc_cached_string(event_arg);
                builder.from_label_and_arg(event_label, event_arg)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity, allowing costly arguments to be recorded. Profiling
    /// continues until the `TimingGuard` returned from this call is dropped.
    ///
    /// If the arguments to a generic activity are cheap to create, use `generic_activity_with_arg`
    /// or `generic_activity_with_args` for their simpler API. However, if they are costly or
    /// require allocation in sufficiently hot contexts, then this allows for a closure to be called
    /// only when arguments were asked to be recorded via `-Z self-profile-events=args`.
    ///
    /// In this case, the closure will be passed a `&mut EventArgRecorder`, to help with recording
    /// one or many arguments within the generic activity being profiled, by calling its
    /// `record_arg` method for example.
    ///
    /// This `EventArgRecorder` may implement more specific traits from other rustc crates, e.g. for
    /// richer handling of rustc-specific argument types, while keeping this single entry-point API
    /// for recording arguments.
    ///
    /// Note: recording at least one argument is *required* for the self-profiler to create the
    /// `TimingGuard`. A panic will be triggered if that doesn't happen. This function exists
    /// explicitly to record arguments, so it fails loudly when there are none to record.
    #[inline(always)]
    pub fn generic_activity_with_arg_recorder<F>(
        &self,
        event_label: &'static str,
        mut f: F,
    ) -> TimingGuard<'_>
    where
        F: FnMut(&mut EventArgRecorder<'_>),
    {
        // Ensure this event will only be recorded when self-profiling is turned on.
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);

            // Ensure the closure to create event arguments will only be called when argument
            // recording is turned on.
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                // Set up the builder and call the user-provided closure to record potentially
                // costly event arguments.
                let mut recorder = EventArgRecorder { profiler, args: SmallVec::new() };
                f(&mut recorder);

                // It is expected that the closure will record at least one argument. If that
                // doesn't happen, it's a bug: we've been explicitly called in order to record
                // arguments, so we fail loudly when there are none to record.
                if recorder.args.is_empty() {
                    panic!(
                        "The closure passed to `generic_activity_with_arg_recorder` needs to \
                         record at least one argument"
                    );
                }

                builder.from_label_and_args(event_label, &recorder.args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

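The key property of the recorder API above is laziness: the closure that builds costly arguments only runs when argument recording is enabled, and it must record at least one argument or the call panics. A hypothetical, self-contained sketch of that contract — `ArgRecorder` and `with_args` here are illustrative stand-ins, not the real `EventArgRecorder` machinery:

```rust
// Simplified stand-in for `EventArgRecorder`.
struct ArgRecorder {
    args: Vec<String>,
}

impl ArgRecorder {
    fn record_arg(&mut self, arg: impl Into<String>) {
        self.args.push(arg.into());
    }
}

// Stand-in for the recording path of `generic_activity_with_arg_recorder`:
// the closure runs only when argument recording is on.
fn with_args(record_args: bool, mut f: impl FnMut(&mut ArgRecorder)) -> Vec<String> {
    if record_args {
        let mut recorder = ArgRecorder { args: Vec::new() };
        f(&mut recorder);
        // Mirrors the real API's contract: no recorded argument is a bug.
        assert!(!recorder.args.is_empty(), "closure must record at least one argument");
        recorder.args
    } else {
        Vec::new() // closure never runs, so costly formatting is skipped entirely
    }
}

fn main() {
    let args = with_args(true, |r| r.record_arg(format!("key={}", 42)));
    assert_eq!(args, vec!["key=42".to_string()]);
    assert!(with_args(false, |r| r.record_arg("never built")).is_empty());
}
```

When recording is off, the `format!` inside the closure is never evaluated, which is exactly why this shape is preferred over eagerly building strings in hot code.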
    /// Record the size of an artifact that the compiler produces
    ///
    /// `artifact_kind` is the class of artifact (e.g., query_cache, object_file, etc.)
    /// `artifact_name` is an identifier for the specific artifact being stored (usually a filename)
    #[inline(always)]
    pub fn artifact_size<A>(&self, artifact_kind: &str, artifact_name: A, size: u64)
    where
        A: Borrow<str> + Into<String>,
    {
        drop(self.exec(EventFilter::ARTIFACT_SIZES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(artifact_kind);
            let event_arg = profiler.get_or_alloc_cached_string(artifact_name);
            let event_id = builder.from_label_and_arg(event_label, event_arg);
            let thread_id = get_thread_id();

            profiler.profiler.record_integer_event(
                profiler.artifact_size_event_kind,
                event_id,
                thread_id,
                size,
            );

            TimingGuard::none()
        }))
    }

385    #[inline(always)]
386    pub fn generic_activity_with_args(
387        &self,
388        event_label: &'static str,
389        event_args: &[String],
390    ) -> TimingGuard<'_> {
391        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
392            let builder = EventIdBuilder::new(&profiler.profiler);
393            let event_label = profiler.get_or_alloc_cached_string(event_label);
394            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
395                let event_args: Vec<_> = event_args
396                    .iter()
397                    .map(|s| profiler.get_or_alloc_cached_string(&s[..]))
398                    .collect();
399                builder.from_label_and_args(event_label, &event_args)
400            } else {
401                builder.from_label(event_label)
402            };
403            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
404        })
405    }
406
407    /// Start profiling a query provider. Profiling continues until the
408    /// TimingGuard returned from this call is dropped.
409    #[inline(always)]
410    pub fn query_provider(&self) -> TimingGuard<'_> {
411        self.exec(EventFilter::QUERY_PROVIDERS, |profiler| {
412            TimingGuard::start(profiler, profiler.query_event_kind, EventId::INVALID)
413        })
414    }
415
416    /// Record a query in-memory cache hit.
417    #[inline(always)]
418    pub fn query_cache_hit(&self, query_invocation_id: QueryInvocationId) {
419        #[inline(never)]
420        #[cold]
421        fn cold_call(profiler_ref: &SelfProfilerRef, query_invocation_id: QueryInvocationId) {
422            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
423                profiler_ref
424                    .profiler
425                    .as_ref()
426                    .unwrap()
427                    .increment_query_cache_hit_counters(QueryInvocationId(query_invocation_id.0));
428            }
429            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HITS) {
430                hint::cold_path();
431                profiler_ref.instant_query_event(
432                    |profiler| profiler.query_cache_hit_event_kind,
433                    query_invocation_id,
434                );
435            }
436        }
437
438        // We check both kinds of query cache hit events at once, to reduce overhead in the
439        // common case (with self-profile disabled).
440        if self.event_filter_mask.intersects(EventFilter::QUERY_CACHE_HIT_COMBINED) {
441            hint::cold_path();
442            cold_call(self, query_invocation_id);
443        }
444    }
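The hot-path check plus `#[cold]`/`#[inline(never)]` helper used by `query_cache_hit` can be sketched in isolation (names here are hypothetical, not rustc's; the real helper records measureme events rather than bumping a counter):

```rust
// Sketch of the cold-path outlining pattern: the cheap filter check stays
// inlined at every call site, while the rare recording work is outlined into
// a #[cold], #[inline(never)] function so it does not bloat callers.
pub fn maybe_record(enabled: bool, counter: &mut u64) {
    #[inline(never)]
    #[cold]
    fn cold_call(counter: &mut u64) {
        // Stand-in for the actual event-recording work.
        *counter += 1;
    }

    if enabled {
        cold_call(counter);
    }
}
```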
445
446    /// Start profiling a query being blocked on a concurrent execution.
447    /// Profiling continues until the TimingGuard returned from this call is
448    /// dropped.
449    #[inline(always)]
450    pub fn query_blocked(&self) -> TimingGuard<'_> {
451        self.exec(EventFilter::QUERY_BLOCKED, |profiler| {
452            TimingGuard::start(profiler, profiler.query_blocked_event_kind, EventId::INVALID)
453        })
454    }
455
456    /// Start profiling how long it takes to load a query result from the
457    /// incremental compilation on-disk cache. Profiling continues until the
458    /// TimingGuard returned from this call is dropped.
459    #[inline(always)]
460    pub fn incr_cache_loading(&self) -> TimingGuard<'_> {
461        self.exec(EventFilter::INCR_CACHE_LOADS, |profiler| {
462            TimingGuard::start(
463                profiler,
464                profiler.incremental_load_result_event_kind,
465                EventId::INVALID,
466            )
467        })
468    }
469
470    /// Start profiling how long it takes to hash query results for incremental compilation.
471    /// Profiling continues until the TimingGuard returned from this call is dropped.
472    #[inline(always)]
473    pub fn incr_result_hashing(&self) -> TimingGuard<'_> {
474        self.exec(EventFilter::INCR_RESULT_HASHING, |profiler| {
475            TimingGuard::start(
476                profiler,
477                profiler.incremental_result_hashing_event_kind,
478                EventId::INVALID,
479            )
480        })
481    }
482
483    #[inline(always)]
484    fn instant_query_event(
485        &self,
486        event_kind: fn(&SelfProfiler) -> StringId,
487        query_invocation_id: QueryInvocationId,
488    ) {
489        let event_id = StringId::new_virtual(query_invocation_id.0);
490        let thread_id = get_thread_id();
491        let profiler = self.profiler.as_ref().unwrap();
492        profiler.profiler.record_instant_event(
493            event_kind(profiler),
494            EventId::from_virtual(event_id),
495            thread_id,
496        );
497    }
498
499    pub fn with_profiler(&self, f: impl FnOnce(&SelfProfiler)) {
500        if let Some(profiler) = &self.profiler {
501            f(profiler)
502        }
503    }
504
505    /// Gets a `StringId` for the given string. This method makes sure that
506    /// any strings going through it will only be allocated once in the
507    /// profiling data.
508    /// Returns `None` if self-profiling is not enabled.
509    pub fn get_or_alloc_cached_string(&self, s: &str) -> Option<StringId> {
510        self.profiler.as_ref().map(|p| p.get_or_alloc_cached_string(s))
511    }
512
513    /// Store query cache hits to the self-profile log.
514    /// Should be called once at the end of the compilation session.
515    ///
516    /// The cache hits are stored per **query invocation**, not **per query kind/type**.
517    /// `analyzeme` can later deduplicate individual query labels from the QueryInvocationId event
518    /// IDs.
519    pub fn store_query_cache_hits(&self) {
520        if self.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
521            let profiler = self.profiler.as_ref().unwrap();
522            let query_hits = profiler.query_hits.read();
523            let builder = EventIdBuilder::new(&profiler.profiler);
524            let thread_id = get_thread_id();
525            for (query_invocation, hit_count) in query_hits.iter().enumerate() {
526                let hit_count = hit_count.load(Ordering::Relaxed);
527                // No need to record empty cache hit counts
528                if hit_count > 0 {
529                    let event_id =
530                        builder.from_label(StringId::new_virtual(query_invocation as u64));
531                    profiler.profiler.record_integer_event(
532                        profiler.query_cache_hit_count_event_kind,
533                        event_id,
534                        thread_id,
535                        hit_count,
536                    );
537                }
538            }
539        }
540    }
541
542    #[inline]
543    pub fn enabled(&self) -> bool {
544        self.profiler.is_some()
545    }
546
547    #[inline]
548    pub fn llvm_recording_enabled(&self) -> bool {
549        self.event_filter_mask.contains(EventFilter::LLVM)
550    }
551    #[inline]
552    pub fn get_self_profiler(&self) -> Option<Arc<SelfProfiler>> {
553        self.profiler.clone()
554    }
555
556    /// Is expensive recording of query keys and/or function arguments enabled?
557    pub fn is_args_recording_enabled(&self) -> bool {
558        self.enabled() && self.event_filter_mask.intersects(EventFilter::ARGS)
559    }
560}
561
562/// A helper for recording costly arguments to self-profiling events. Used with
563/// `SelfProfilerRef::generic_activity_with_arg_recorder`.
564pub struct EventArgRecorder<'p> {
565    /// The `SelfProfiler` used to intern the event arguments that users will ask to record.
566    profiler: &'p SelfProfiler,
567
568    /// The interned event arguments to be recorded in the generic activity event.
569    ///
570    /// The most common case, when actually recording event arguments, is to record a single
571    /// argument; two are recorded in a couple of places.
572    args: SmallVec<[StringId; 2]>,
573}
574
575impl EventArgRecorder<'_> {
576    /// Records a single argument within the current generic activity being profiled.
577    ///
578    /// Note: when self-profiling with costly event arguments, at least one argument
579    /// needs to be recorded. A panic will be triggered if that doesn't happen.
580    pub fn record_arg<A>(&mut self, event_arg: A)
581    where
582        A: Borrow<str> + Into<String>,
583    {
584        let event_arg = self.profiler.get_or_alloc_cached_string(event_arg);
585        self.args.push(event_arg);
586    }
587}
588
589pub struct SelfProfiler {
590    profiler: Profiler,
591    event_filter_mask: EventFilter,
592
593    string_cache: RwLock<FxHashMap<String, StringId>>,
594
595    /// Recording individual query cache hits as "instant" measureme events
596    /// is incredibly expensive. Instead of doing that, we simply aggregate
597    /// cache hit *counts* per query invocation, and then store the final count
598    /// of cache hits per invocation at the end of the compilation session.
599    ///
600    /// With this approach, we don't know the individual thread IDs and timestamps
601    /// of cache hits, but it has very little overhead on top of `-Zself-profile`.
602    /// Recording the cache hits as individual events made compilation 3-5x slower.
603    ///
604    /// Query invocation IDs should be monotonic integers, so we can store them in a vec,
605    /// rather than using a hashmap.
606    query_hits: RwLock<Vec<AtomicU64>>,
607
608    query_event_kind: StringId,
609    generic_activity_event_kind: StringId,
610    incremental_load_result_event_kind: StringId,
611    incremental_result_hashing_event_kind: StringId,
612    query_blocked_event_kind: StringId,
613    query_cache_hit_event_kind: StringId,
614    artifact_size_event_kind: StringId,
615    /// Total cache hits per query invocation
616    query_cache_hit_count_event_kind: StringId,
617}
618
619impl SelfProfiler {
620    pub fn new(
621        output_directory: &Path,
622        crate_name: Option<&str>,
623        event_filters: Option<&[String]>,
624        counter_name: &str,
625    ) -> Result<SelfProfiler, Box<dyn Error + Send + Sync>> {
626        fs::create_dir_all(output_directory)?;
627
628        let crate_name = crate_name.unwrap_or("unknown-crate");
629        // HACK(eddyb) we need to pad the PID, strange as it may seem, as its
630        // length can behave as a source of entropy for heap addresses, when
631        // ASLR is disabled and the heap is otherwise deterministic.
632        let pid: u32 = process::id();
633        let filename = format!("{crate_name}-{pid:07}.rustc_profile");
634        let path = output_directory.join(filename);
635        let profiler =
636            Profiler::with_counter(&path, measureme::counters::Counter::by_name(counter_name)?)?;
637
638        let query_event_kind = profiler.alloc_string("Query");
639        let generic_activity_event_kind = profiler.alloc_string("GenericActivity");
640        let incremental_load_result_event_kind = profiler.alloc_string("IncrementalLoadResult");
641        let incremental_result_hashing_event_kind =
642            profiler.alloc_string("IncrementalResultHashing");
643        let query_blocked_event_kind = profiler.alloc_string("QueryBlocked");
644        let query_cache_hit_event_kind = profiler.alloc_string("QueryCacheHit");
645        let artifact_size_event_kind = profiler.alloc_string("ArtifactSize");
646        let query_cache_hit_count_event_kind = profiler.alloc_string("QueryCacheHitCount");
647
648        let mut event_filter_mask = EventFilter::empty();
649
650        if let Some(event_filters) = event_filters {
651            let mut unknown_events = vec![];
652            for item in event_filters {
653                if let Some(&(_, mask)) =
654                    EVENT_FILTERS_BY_NAME.iter().find(|&(name, _)| name == item)
655                {
656                    event_filter_mask |= mask;
657                } else {
658                    unknown_events.push(item.clone());
659                }
660            }
661
662            // Warn about any unknown event names
663            if !unknown_events.is_empty() {
664                unknown_events.sort();
665                unknown_events.dedup();
666
667                warn!(
668                    "Unknown self-profiler events specified: {}. Available options are: {}.",
669                    unknown_events.join(", "),
670                    EVENT_FILTERS_BY_NAME
671                        .iter()
672                        .map(|&(name, _)| name.to_string())
673                        .collect::<Vec<_>>()
674                        .join(", ")
675                );
676            }
677        } else {
678            event_filter_mask = EventFilter::DEFAULT;
679        }
680
681        Ok(SelfProfiler {
682            profiler,
683            event_filter_mask,
684            string_cache: RwLock::new(FxHashMap::default()),
685            query_event_kind,
686            generic_activity_event_kind,
687            incremental_load_result_event_kind,
688            incremental_result_hashing_event_kind,
689            query_blocked_event_kind,
690            query_cache_hit_event_kind,
691            artifact_size_event_kind,
692            query_cache_hit_count_event_kind,
693            query_hits: Default::default(),
694        })
695    }
696
697    /// Allocates a new string in the profiling data. Does not do any caching
698    /// or deduplication.
699    pub fn alloc_string<STR: SerializableString + ?Sized>(&self, s: &STR) -> StringId {
700        self.profiler.alloc_string(s)
701    }
702
703    /// Record a cache hit for a query invocation.
704    pub fn increment_query_cache_hit_counters(&self, id: QueryInvocationId) {
705        // Fast path: assume that the query was already encountered before, and just record
706        // a cache hit.
707        let mut guard = self.query_hits.upgradable_read();
708        let query_hits = &guard;
709        let index = id.0 as usize;
710        if index < query_hits.len() {
711            // We only want to increment the count, no other synchronization is required
712            query_hits[index].fetch_add(1, Ordering::Relaxed);
713        } else {
714            // If not, we need to extend the query hit map to the highest observed ID
715            guard.with_upgraded(|vec| {
716                vec.resize_with(index + 1, || AtomicU64::new(0));
717                vec[index] = AtomicU64::from(1);
718            });
719        }
720    }
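The fast-path/slow-path scheme above can be sketched with std types (a minimal sketch: the real code uses an upgradable read lock from rustc's sync layer, and `HitCounters` is a hypothetical stand-in for the `query_hits` field):

```rust
use std::sync::RwLock;
use std::sync::atomic::{AtomicU64, Ordering};

// Counter vector indexed by a monotonically assigned invocation ID.
pub struct HitCounters {
    hits: RwLock<Vec<AtomicU64>>,
}

impl HitCounters {
    pub fn new() -> Self {
        HitCounters { hits: RwLock::new(Vec::new()) }
    }

    pub fn increment(&self, index: usize) {
        // Fast path: the slot already exists, so a shared read lock plus a
        // relaxed atomic add is all the synchronization required.
        {
            let hits = self.hits.read().unwrap();
            if index < hits.len() {
                hits[index].fetch_add(1, Ordering::Relaxed);
                return;
            }
        }
        // Slow path: grow the vector under the write lock. Re-check the
        // length, since another thread may have resized it in the meantime.
        let mut hits = self.hits.write().unwrap();
        if index >= hits.len() {
            hits.resize_with(index + 1, || AtomicU64::new(0));
        }
        hits[index].fetch_add(1, Ordering::Relaxed);
    }

    pub fn get(&self, index: usize) -> u64 {
        self.hits.read().unwrap()[index].load(Ordering::Relaxed)
    }
}
```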
721
722    /// Gets a `StringId` for the given string. This method makes sure that
723    /// any strings going through it will only be allocated once in the
724    /// profiling data.
725    pub fn get_or_alloc_cached_string<A>(&self, s: A) -> StringId
726    where
727        A: Borrow<str> + Into<String>,
728    {
729        // Only acquire a read-lock first since we assume that the string is
730        // already present in the common case.
731        {
732            let string_cache = self.string_cache.read();
733
734            if let Some(&id) = string_cache.get(s.borrow()) {
735                return id;
736            }
737        }
738
739        let mut string_cache = self.string_cache.write();
740        // Check if the string has already been added in the small time window
741        // between dropping the read lock and acquiring the write lock.
742        match string_cache.entry(s.into()) {
743            Entry::Occupied(e) => *e.get(),
744            Entry::Vacant(e) => {
745                let string_id = self.profiler.alloc_string(&e.key()[..]);
746                *e.insert(string_id)
747            }
748        }
749    }
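The read-then-write interning pattern above can be sketched with std types (a hypothetical `StringInterner` with a plain counter standing in for measureme's string allocator):

```rust
use std::collections::HashMap;
use std::collections::hash_map::Entry;
use std::sync::RwLock;

pub struct StringInterner {
    cache: RwLock<HashMap<String, u64>>,
}

impl StringInterner {
    pub fn new() -> Self {
        StringInterner { cache: RwLock::new(HashMap::new()) }
    }

    pub fn intern(&self, s: &str) -> u64 {
        // Common case: the string is already cached, so a read lock suffices.
        if let Some(&id) = self.cache.read().unwrap().get(s) {
            return id;
        }
        // Re-check under the write lock: another thread may have inserted the
        // string between dropping the read lock and acquiring the write lock.
        let mut cache = self.cache.write().unwrap();
        let next_id = cache.len() as u64;
        match cache.entry(s.to_string()) {
            Entry::Occupied(e) => *e.get(),
            Entry::Vacant(e) => *e.insert(next_id),
        }
    }
}
```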
750
751    pub fn map_query_invocation_id_to_string(&self, from: QueryInvocationId, to: StringId) {
752        let from = StringId::new_virtual(from.0);
753        self.profiler.map_virtual_to_concrete_string(from, to);
754    }
755
756    pub fn bulk_map_query_invocation_id_to_single_string<I>(&self, from: I, to: StringId)
757    where
758        I: Iterator<Item = QueryInvocationId> + ExactSizeIterator,
759    {
760        let from = from.map(|qid| StringId::new_virtual(qid.0));
761        self.profiler.bulk_map_virtual_to_single_concrete_string(from, to);
762    }
763
764    pub fn query_key_recording_enabled(&self) -> bool {
765        self.event_filter_mask.contains(EventFilter::QUERY_KEYS)
766    }
767
768    pub fn event_id_builder(&self) -> EventIdBuilder<'_> {
769        EventIdBuilder::new(&self.profiler)
770    }
771}
772
773#[must_use]
774pub struct TimingGuard<'a>(Option<measureme::TimingGuard<'a>>);
775
776impl<'a> TimingGuard<'a> {
777    #[inline]
778    pub fn start(
779        profiler: &'a SelfProfiler,
780        event_kind: StringId,
781        event_id: EventId,
782    ) -> TimingGuard<'a> {
783        let thread_id = get_thread_id();
784        let raw_profiler = &profiler.profiler;
785        let timing_guard =
786            raw_profiler.start_recording_interval_event(event_kind, event_id, thread_id);
787        TimingGuard(Some(timing_guard))
788    }
789
790    #[inline]
791    pub fn finish_with_query_invocation_id(self, query_invocation_id: QueryInvocationId) {
792        if let Some(guard) = self.0 {
793            outline(|| {
794                let event_id = StringId::new_virtual(query_invocation_id.0);
795                let event_id = EventId::from_virtual(event_id);
796                guard.finish_with_override_event_id(event_id);
797            });
798        }
799    }
800
801    #[inline]
802    pub fn none() -> TimingGuard<'a> {
803        TimingGuard(None)
804    }
805
806    #[inline(always)]
807    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
808        let _timer = self;
809        f()
810    }
811}
812
813struct VerboseInfo {
814    start_time: Instant,
815    start_rss: Option<usize>,
816    message: String,
817    format: TimePassesFormat,
818}
819
820#[must_use]
821pub struct VerboseTimingGuard<'a> {
822    info: Option<VerboseInfo>,
823    _guard: TimingGuard<'a>,
824}
825
826impl<'a> VerboseTimingGuard<'a> {
827    pub fn start(
828        message_and_format: Option<(String, TimePassesFormat)>,
829        _guard: TimingGuard<'a>,
830    ) -> Self {
831        VerboseTimingGuard {
832            _guard,
833            info: message_and_format.map(|(message, format)| VerboseInfo {
834                start_time: Instant::now(),
835                start_rss: get_resident_set_size(),
836                message,
837                format,
838            }),
839        }
840    }
841
842    #[inline(always)]
843    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
844        let _timer = self;
845        f()
846    }
847}
848
849impl Drop for VerboseTimingGuard<'_> {
850    fn drop(&mut self) {
851        if let Some(info) = &self.info {
852            let end_rss = get_resident_set_size();
853            let dur = info.start_time.elapsed();
854            print_time_passes_entry(&info.message, dur, info.start_rss, end_rss, info.format);
855        }
856    }
857}
858
859struct JsonTimePassesEntry<'a> {
860    pass: &'a str,
861    time: f64,
862    start_rss: Option<usize>,
863    end_rss: Option<usize>,
864}
865
866impl Display for JsonTimePassesEntry<'_> {
867    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
868        let Self { pass: what, time, start_rss, end_rss } = self;
869        write!(f, r#"{{"pass":"{what}","time":{time},"rss_start":"#).unwrap();
870        match start_rss {
871            Some(rss) => write!(f, "{rss}")?,
872            None => write!(f, "null")?,
873        }
874        write!(f, r#","rss_end":"#)?;
875        match end_rss {
876            Some(rss) => write!(f, "{rss}")?,
877            None => write!(f, "null")?,
878        }
879        write!(f, "}}")?;
880        Ok(())
881    }
882}
883
884pub fn print_time_passes_entry(
885    what: &str,
886    dur: Duration,
887    start_rss: Option<usize>,
888    end_rss: Option<usize>,
889    format: TimePassesFormat,
890) {
891    match format {
892        TimePassesFormat::Json => {
893            let entry =
894                JsonTimePassesEntry { pass: what, time: dur.as_secs_f64(), start_rss, end_rss };
895
896            eprintln!(r#"time: {entry}"#);
897            return;
898        }
899        TimePassesFormat::Text => (),
900    }
901
902    // Print the pass if its duration is greater than 5 ms, or it changed the
903    // measured RSS.
904    let is_notable = || {
905        if dur.as_millis() > 5 {
906            return true;
907        }
908
909        if let (Some(start_rss), Some(end_rss)) = (start_rss, end_rss) {
910            let change_rss = end_rss.abs_diff(start_rss);
911            if change_rss > 0 {
912                return true;
913            }
914        }
915
916        false
917    };
918    if !is_notable() {
919        return;
920    }
921
922    let rss_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as usize;
923    let rss_change_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as i128;
924
925    let mem_string = match (start_rss, end_rss) {
926        (Some(start_rss), Some(end_rss)) => {
927            let change_rss = end_rss as i128 - start_rss as i128;
928
929            format!(
930                "; rss: {:>4}MB -> {:>4}MB ({:>+5}MB)",
931                rss_to_mb(start_rss),
932                rss_to_mb(end_rss),
933                rss_change_to_mb(change_rss),
934            )
935        }
936        (Some(start_rss), None) => format!("; rss start: {:>4}MB", rss_to_mb(start_rss)),
937        (None, Some(end_rss)) => format!("; rss end: {:>4}MB", rss_to_mb(end_rss)),
938        (None, None) => String::new(),
939    };
940
941    eprintln!("time: {:>7}{}\t{}", duration_to_secs_str(dur), mem_string, what);
942}
943
944// Hack up our own formatting for the duration to make it easier for scripts
945// to parse (always use the same number of decimal places and the same unit).
946pub fn duration_to_secs_str(dur: std::time::Duration) -> String {
947    format!("{:.3}", dur.as_secs_f64())
948}
949
950fn get_thread_id() -> u32 {
951    std::thread::current().id().as_u64().get() as u32
952}
953
954// Memory reporting
955cfg_select! {
956    windows => {
957        pub fn get_resident_set_size() -> Option<usize> {
958            use windows::{
959                Win32::System::ProcessStatus::{K32GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS},
960                Win32::System::Threading::GetCurrentProcess,
961            };
962
963            let mut pmc = PROCESS_MEMORY_COUNTERS::default();
964            let pmc_size = size_of_val(&pmc);
965            unsafe {
966                K32GetProcessMemoryInfo(
967                    GetCurrentProcess(),
968                    &mut pmc,
969                    pmc_size as u32,
970                )
971            }
972            .ok()
973            .ok()?;
974
975            Some(pmc.WorkingSetSize)
976        }
977    }
978    target_os = "macos" => {
979        pub fn get_resident_set_size() -> Option<usize> {
980            use libc::{c_int, c_void, getpid, proc_pidinfo, proc_taskinfo, PROC_PIDTASKINFO};
981            use std::mem;
982            const PROC_TASKINFO_SIZE: c_int = size_of::<proc_taskinfo>() as c_int;
983
984            unsafe {
985                let mut info: proc_taskinfo = mem::zeroed();
986                let info_ptr = &mut info as *mut proc_taskinfo as *mut c_void;
987                let pid = getpid() as c_int;
988                let ret = proc_pidinfo(pid, PROC_PIDTASKINFO, 0, info_ptr, PROC_TASKINFO_SIZE);
989                if ret == PROC_TASKINFO_SIZE {
990                    Some(info.pti_resident_size as usize)
991                } else {
992                    None
993                }
994            }
995        }
996    }
997    unix => {
998        pub fn get_resident_set_size() -> Option<usize> {
999            use libc::{sysconf, _SC_PAGESIZE};
1000            let field = 1;
1001            let contents = fs::read("/proc/self/statm").ok()?;
1002            let contents = String::from_utf8(contents).ok()?;
1003            let s = contents.split_whitespace().nth(field)?;
1004            let npages = s.parse::<usize>().ok()?;
1005            // SAFETY: `sysconf(_SC_PAGESIZE)` has no side effects and is safe to call.
1006            Some(npages * unsafe { sysconf(_SC_PAGESIZE) } as usize)
1007        }
1008    }
1009    _ => {
1010        pub fn get_resident_set_size() -> Option<usize> {
1011            None
1012        }
1013    }
1014}
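The unix branch's `/proc/self/statm` handling can be isolated into a testable helper (hypothetical `rss_from_statm` name; field 1 of statm is the resident page count, which the code above multiplies by the page size from `sysconf(_SC_PAGESIZE)`):

```rust
// Parses the resident set size out of a /proc/self/statm-formatted string.
// statm is a single line of whitespace-separated page counts; field 1 is
// the resident pages.
pub fn rss_from_statm(contents: &str, page_size: usize) -> Option<usize> {
    let npages = contents.split_whitespace().nth(1)?.parse::<usize>().ok()?;
    Some(npages * page_size)
}
```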
1015
1016#[cfg(test)]
1017mod tests;