//! # Rust Compiler Self-Profiling
//!
//! This module implements the basic framework for the compiler's self-
//! profiling support. It provides the `SelfProfiler` type which enables
//! recording "events". An event is something that starts and ends at a given
//! point in time and has an ID and a kind attached to it. This allows for
//! tracing the compiler's activity.
//!
//! Internally this module uses the custom tailored [measureme][mm] crate for
//! efficiently recording events to disk in a compact format that can be
//! post-processed and analyzed by the suite of tools in the `measureme`
//! project. The highest priority for the tracing framework is to incur as
//! little overhead as possible.
//!
//!
//! ## Event Overview
//!
//! Events have a few properties (sketched below):
//!
//! - The `event_kind` designates the broad category of an event (e.g. does it
//!   correspond to the execution of a query provider or to loading something
//!   from the incr. comp. on-disk cache, etc.).
//! - The `event_id` designates the query invocation or function call it
//!   corresponds to, possibly including the query key or function arguments.
//! - Each event stores the ID of the thread it was recorded on.
//! - The timestamp stores beginning and end of the event, or the single point
//!   in time it occurred at for "instant" events.
//!
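//! Conceptually, a recorded event therefore looks roughly like this (an
//! illustrative sketch, not the actual `measureme` wire format):
//!
//! ```ignore (illustrative)
//! struct Event {
//!     event_kind: StringId, // e.g. "Query" or "GenericActivity"
//!     event_id: EventId,    // e.g. a query invocation, possibly with its key
//!     thread_id: u32,
//!     // Start and end time for interval events, or a single point
//!     // in time for "instant" events.
//!     timestamp: Timestamp,
//! }
//! ```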
//!
//! ## Event Filtering
//!
//! Event generation can be filtered by event kind. Recording all possible
//! events generates a lot of data, much of which is not needed for most kinds
//! of analysis. So, in order to keep overhead as low as possible for a given
//! use case, the `SelfProfiler` will only record the kinds of events that
//! pass the filter specified as a command line argument to the compiler.
//!
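//! For example, invoking the compiler with
//! `-Z self-profile -Z self-profile-events=query-provider,query-blocked`
//! (using names from `EVENT_FILTERS_BY_NAME` below) records only those two
//! kinds of events.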
//!
//! ## `event_id` Assignment
//!
//! As far as `measureme` is concerned, `event_id`s are just strings. However,
//! it would incur too much overhead to generate and persist each `event_id`
//! string at the point where the event is recorded. In order to make this more
//! efficient `measureme` has two features (a sketch follows the list below):
//!
//! - Strings can share their content, so that re-occurring parts don't have to
//!   be copied over and over again. One allocates a string in `measureme` and
//!   gets back a `StringId`. This `StringId` is then used to refer to that
//!   string. `measureme` strings are actually DAGs of string components so that
//!   arbitrary sharing of substrings can be done efficiently. This is useful
//!   because `event_id`s contain lots of redundant text like query names or
//!   def-path components.
//!
//! - `StringId`s can be "virtual" which means that the client picks a numeric
//!   ID according to some application-specific scheme and can later make that
//!   ID be mapped to an actual string. This is used to cheaply generate
//!   `event_id`s while the events actually occur, causing little timing
//!   distortion, and then later map those `StringId`s, in bulk, to actual
//!   `event_id` strings. This way the largest part of the tracing overhead is
//!   localized to one contiguous chunk of time.
//!
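//! A minimal sketch of both features, using only APIs that appear elsewhere in
//! this module (`Profiler::alloc_string`, `StringId::new_virtual`):
//!
//! ```ignore (illustrative)
//! // Content sharing: allocate a string once, then refer to it by ID.
//! let label: StringId = profiler.alloc_string("typeck");
//!
//! // Virtual IDs: pick a cheap numeric ID while events are being recorded;
//! // the actual string contents are attached later, in bulk.
//! let virtual_id = StringId::new_virtual(some_numeric_id);
//! ```
//!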
//! How are these `event_id`s generated in the compiler? For things that occur
//! infrequently (e.g. "generic activities"), we just allocate the string the
//! first time it is used and then keep the `StringId` in a hash table. This
//! is implemented in `SelfProfiler::get_or_alloc_cached_string()`.
//!
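//! Roughly, the caching scheme looks like this (a sketch; the real method
//! also handles generic string types and the read/write lock hand-off):
//!
//! ```ignore (illustrative)
//! fn get_or_alloc_cached_string(&self, s: &str) -> StringId {
//!     if let Some(&id) = self.string_cache.read().get(s) {
//!         return id; // fast path: the string has been allocated before
//!     }
//!     let mut cache = self.string_cache.write();
//!     *cache.entry(s.to_owned()).or_insert_with(|| self.profiler.alloc_string(s))
//! }
//! ```
//!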
//! For queries it gets more interesting: First we need a unique numeric ID for
//! each query invocation (the `QueryInvocationId`). This ID is used as the
//! virtual `StringId` we use as `event_id` for a given event. This ID has to
//! be available both when the query is executed and later, together with the
//! query key, when we allocate the actual `event_id` strings in bulk.
//!
//! We could make the compiler generate and keep track of such an ID for each
//! query invocation but luckily we already have something that fits all the
//! requirements: the query's `DepNodeIndex`. So we use the numeric value
//! of the `DepNodeIndex` as `event_id` when recording the event and then,
//! just before the query context is dropped, we walk the entire query cache
//! (which stores the `DepNodeIndex` along with the query key for each
//! invocation) and allocate the corresponding strings together with a mapping
//! for `DepNodeIndex as StringId`.
//!
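//! A sketch of that final bulk pass (the query-cache iteration is illustrative;
//! `map_virtual_to_concrete_string` is the `measureme` call that attaches the
//! string contents to a virtual ID):
//!
//! ```ignore (illustrative)
//! for (query_key, dep_node_index) in query_cache.iter() {
//!     let event_id = profiler.alloc_string(&format!("{query_name}({query_key:?})")[..]);
//!     profiler.map_virtual_to_concrete_string(
//!         StringId::new_virtual(dep_node_index.as_u64()),
//!         event_id,
//!     );
//! }
//! ```
//!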
//! [mm]: https://github.com/rust-lang/measureme/

use std::borrow::Borrow;
use std::collections::hash_map::Entry;
use std::error::Error;
use std::fmt::Display;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{Duration, Instant};
use std::{fs, hint, process};

pub use measureme::EventId;
use measureme::{EventIdBuilder, Profiler, SerializableString, StringId};
use parking_lot::RwLock;
use smallvec::SmallVec;
use tracing::warn;

use crate::fx::FxHashMap;
use crate::outline;
use crate::sync::AtomicU64;

bitflags::bitflags! {
    #[derive(Clone, Copy)]
    struct EventFilter: u16 {
        const GENERIC_ACTIVITIES = 1 << 0;
        const QUERY_PROVIDERS = 1 << 1;
        /// Store detailed instant events, including timestamp and thread ID,
        /// for each query cache hit. Note that this is quite expensive.
        const QUERY_CACHE_HITS = 1 << 2;
        const QUERY_BLOCKED = 1 << 3;
        const INCR_CACHE_LOADS = 1 << 4;

        const QUERY_KEYS = 1 << 5;
        const FUNCTION_ARGS = 1 << 6;
        const LLVM = 1 << 7;
        const INCR_RESULT_HASHING = 1 << 8;
        const ARTIFACT_SIZES = 1 << 9;
        /// Store aggregated counts of cache hits per query invocation.
        const QUERY_CACHE_HIT_COUNTS = 1 << 10;

        const DEFAULT = Self::GENERIC_ACTIVITIES.bits() |
            Self::QUERY_PROVIDERS.bits() |
            Self::QUERY_BLOCKED.bits() |
            Self::INCR_CACHE_LOADS.bits() |
            Self::INCR_RESULT_HASHING.bits() |
            Self::ARTIFACT_SIZES.bits() |
            Self::QUERY_CACHE_HIT_COUNTS.bits();

        const ARGS = Self::QUERY_KEYS.bits() | Self::FUNCTION_ARGS.bits();
        const QUERY_CACHE_HIT_COMBINED =
            Self::QUERY_CACHE_HITS.bits() | Self::QUERY_CACHE_HIT_COUNTS.bits();
    }
}

// keep this in sync with the `-Z self-profile-events` help message in rustc_session/options.rs
const EVENT_FILTERS_BY_NAME: &[(&str, EventFilter)] = &[
    ("none", EventFilter::empty()),
    ("all", EventFilter::all()),
    ("default", EventFilter::DEFAULT),
    ("generic-activity", EventFilter::GENERIC_ACTIVITIES),
    ("query-provider", EventFilter::QUERY_PROVIDERS),
    ("query-cache-hit", EventFilter::QUERY_CACHE_HITS),
    ("query-cache-hit-count", EventFilter::QUERY_CACHE_HIT_COUNTS),
    ("query-blocked", EventFilter::QUERY_BLOCKED),
    ("incr-cache-load", EventFilter::INCR_CACHE_LOADS),
    ("query-keys", EventFilter::QUERY_KEYS),
    ("function-args", EventFilter::FUNCTION_ARGS),
    ("args", EventFilter::ARGS),
    ("llvm", EventFilter::LLVM),
    ("incr-result-hashing", EventFilter::INCR_RESULT_HASHING),
    ("artifact-sizes", EventFilter::ARTIFACT_SIZES),
];
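
// For example, `-Z self-profile-events=default,args` ORs `EventFilter::DEFAULT`
// and `EventFilter::ARGS` into the filter mask (see the parsing loop in
// `SelfProfiler::new` below).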

/// Something that uniquely identifies a query invocation.
pub struct QueryInvocationId(pub u32);

/// Which format to use for `-Z time-passes`
#[derive(Clone, Copy, PartialEq, Hash, Debug)]
pub enum TimePassesFormat {
    /// Emit human readable text
    Text,
    /// Emit structured JSON
    Json,
}

/// A reference to the SelfProfiler. It can be cloned and sent across thread
/// boundaries at will.
#[derive(Clone)]
pub struct SelfProfilerRef {
    // This field is `None` if self-profiling is disabled for the current
    // compilation session.
    profiler: Option<Arc<SelfProfiler>>,

    // We store the filter mask directly in the reference because that doesn't
    // cost anything and allows for filtering without checking if the profiler
    // is actually enabled.
    event_filter_mask: EventFilter,

    // Print verbose generic activities to stderr.
    print_verbose_generic_activities: Option<TimePassesFormat>,
}

impl SelfProfilerRef {
    pub fn new(
        profiler: Option<Arc<SelfProfiler>>,
        print_verbose_generic_activities: Option<TimePassesFormat>,
    ) -> SelfProfilerRef {
        // If there is no SelfProfiler then the filter mask is set to NONE,
        // ensuring that nothing ever tries to actually access it.
        let event_filter_mask =
            profiler.as_ref().map_or(EventFilter::empty(), |p| p.event_filter_mask);

        SelfProfilerRef { profiler, event_filter_mask, print_verbose_generic_activities }
    }

    /// This shim makes sure that calls only get executed if the filter mask
    /// lets them pass. It also contains some trickery to make sure that
    /// code is optimized for non-profiling compilation sessions, i.e. anything
    /// past the filter check is never inlined so it doesn't clutter the fast
    /// path.
    #[inline(always)]
    fn exec<F>(&self, event_filter: EventFilter, f: F) -> TimingGuard<'_>
    where
        F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
    {
        #[inline(never)]
        #[cold]
        fn cold_call<F>(profiler_ref: &SelfProfilerRef, f: F) -> TimingGuard<'_>
        where
            F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
        {
            let profiler = profiler_ref.profiler.as_ref().unwrap();
            f(profiler)
        }

        if self.event_filter_mask.contains(event_filter) {
            cold_call(self, f)
        } else {
            TimingGuard::none()
        }
    }

    /// Start profiling a verbose generic activity. Profiling continues until the
    /// VerboseTimingGuard returned from this call is dropped. In addition to recording
    /// a measureme event, "verbose" generic activities also print a timing entry to
    /// stderr if the compiler is invoked with -Ztime-passes.
    pub fn verbose_generic_activity(&self, event_label: &'static str) -> VerboseTimingGuard<'_> {
        let message_and_format =
            self.print_verbose_generic_activities.map(|format| (event_label.to_owned(), format));

        VerboseTimingGuard::start(message_and_format, self.generic_activity(event_label))
    }

    /// Like `verbose_generic_activity`, but with an extra arg.
    pub fn verbose_generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> VerboseTimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        let message_and_format = self
            .print_verbose_generic_activities
            .map(|format| (format!("{}({})", event_label, event_arg.borrow()), format));

        VerboseTimingGuard::start(
            message_and_format,
            self.generic_activity_with_arg(event_label, event_arg),
        )
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
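    ///
    /// Typical usage (the label here is illustrative):
    ///
    /// ```ignore (requires an active compilation session)
    /// let _timer = prof.generic_activity("codegen_crate");
    /// // ... the work being measured ...
    /// // The event ends when `_timer` is dropped.
    /// ```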
    #[inline(always)]
    pub fn generic_activity(&self, event_label: &'static str) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = EventId::from_label(event_label);
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling with some event filter for a given event. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_event_id(&self, event_id: EventId) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> TimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_arg = profiler.get_or_alloc_cached_string(event_arg);
                builder.from_label_and_arg(event_label, event_arg)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity, allowing costly arguments to be recorded. Profiling
    /// continues until the `TimingGuard` returned from this call is dropped.
    ///
    /// If the arguments to a generic activity are cheap to create, use `generic_activity_with_arg`
    /// or `generic_activity_with_args` for their simpler API. However, if they are costly or
    /// require allocation in sufficiently hot contexts, then this allows for a closure to be called
    /// only when arguments were asked to be recorded via `-Z self-profile-events=args`.
    ///
    /// In this case, the closure will be passed a `&mut EventArgRecorder`, to help with recording
    /// one or many arguments within the generic activity being profiled, by calling its
    /// `record_arg` method for example.
    ///
    /// This `EventArgRecorder` may implement more specific traits from other rustc crates, e.g. for
    /// richer handling of rustc-specific argument types, while keeping this single entry-point API
    /// for recording arguments.
    ///
    /// Note: recording at least one argument is *required* for the self-profiler to create the
    /// `TimingGuard`. A panic will be triggered if that doesn't happen. This function exists
    /// explicitly to record arguments, so it fails loudly when there are none to record.
    ///
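    /// For example (the label and argument are illustrative):
    ///
    /// ```ignore (requires an active compilation session)
    /// prof.generic_activity_with_arg_recorder("my_activity", |recorder| {
    ///     // Only executed when `-Z self-profile-events=args` is active.
    ///     recorder.record_arg(some_costly_string);
    /// });
    /// ```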
    #[inline(always)]
    pub fn generic_activity_with_arg_recorder<F>(
        &self,
        event_label: &'static str,
        mut f: F,
    ) -> TimingGuard<'_>
    where
        F: FnMut(&mut EventArgRecorder<'_>),
    {
        // Ensure this event will only be recorded when self-profiling is turned on.
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);

            // Ensure the closure to create event arguments will only be called when argument
            // recording is turned on.
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                // Set up the builder and call the user-provided closure to record potentially
                // costly event arguments.
                let mut recorder = EventArgRecorder { profiler, args: SmallVec::new() };
                f(&mut recorder);

                // It is expected that the closure will record at least one argument. If that
                // doesn't happen, it's a bug: we've been explicitly called in order to record
                // arguments, so we fail loudly when there are none to record.
                if recorder.args.is_empty() {
                    panic!(
                        "The closure passed to `generic_activity_with_arg_recorder` needs to \
                         record at least one argument"
                    );
                }

                builder.from_label_and_args(event_label, &recorder.args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Record the size of an artifact that the compiler produces.
    ///
    /// `artifact_kind` is the class of artifact (e.g., query_cache, object_file, etc.).
    /// `artifact_name` is an identifier for the specific artifact being stored (usually a filename).
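    ///
    /// For example (illustrative values):
    ///
    /// ```ignore (requires an active compilation session)
    /// prof.artifact_size("object_file", "libfoo.o", size_in_bytes);
    /// ```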
    #[inline(always)]
    pub fn artifact_size<A>(&self, artifact_kind: &str, artifact_name: A, size: u64)
    where
        A: Borrow<str> + Into<String>,
    {
        drop(self.exec(EventFilter::ARTIFACT_SIZES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(artifact_kind);
            let event_arg = profiler.get_or_alloc_cached_string(artifact_name);
            let event_id = builder.from_label_and_arg(event_label, event_arg);
            let thread_id = get_thread_id();

            profiler.profiler.record_integer_event(
                profiler.artifact_size_event_kind,
                event_id,
                thread_id,
                size,
            );

            TimingGuard::none()
        }))
    }

    #[inline(always)]
    pub fn generic_activity_with_args(
        &self,
        event_label: &'static str,
        event_args: &[String],
    ) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_args: Vec<_> = event_args
                    .iter()
                    .map(|s| profiler.get_or_alloc_cached_string(&s[..]))
                    .collect();
                builder.from_label_and_args(event_label, &event_args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a query provider. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn query_provider(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_PROVIDERS, |profiler| {
            TimingGuard::start(profiler, profiler.query_event_kind, EventId::INVALID)
        })
    }

    /// Record a query in-memory cache hit.
    #[inline(always)]
    pub fn query_cache_hit(&self, query_invocation_id: QueryInvocationId) {
        #[inline(never)]
        #[cold]
        fn cold_call(profiler_ref: &SelfProfilerRef, query_invocation_id: QueryInvocationId) {
            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
                profiler_ref
                    .profiler
                    .as_ref()
                    .unwrap()
                    .increment_query_cache_hit_counters(QueryInvocationId(query_invocation_id.0));
            }
            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HITS) {
                hint::cold_path();
                profiler_ref.instant_query_event(
                    |profiler| profiler.query_cache_hit_event_kind,
                    query_invocation_id,
                );
            }
        }

        // We check both kinds of query cache hit events at once, to reduce overhead in the
        // common case (with self-profile disabled).
        if self.event_filter_mask.intersects(EventFilter::QUERY_CACHE_HIT_COMBINED) {
            hint::cold_path();
            cold_call(self, query_invocation_id);
        }
    }

    /// Start profiling a query being blocked on a concurrent execution.
    /// Profiling continues until the TimingGuard returned from this call is
    /// dropped.
    #[inline(always)]
    pub fn query_blocked(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_BLOCKED, |profiler| {
            TimingGuard::start(profiler, profiler.query_blocked_event_kind, EventId::INVALID)
        })
    }

    /// Start profiling how long it takes to load a query result from the
    /// incremental compilation on-disk cache. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_cache_loading(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_CACHE_LOADS, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_load_result_event_kind,
                EventId::INVALID,
            )
        })
    }

    /// Start profiling how long it takes to hash query results for incremental compilation.
    /// Profiling continues until the TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_result_hashing(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_RESULT_HASHING, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_result_hashing_event_kind,
                EventId::INVALID,
            )
        })
    }

    #[inline(always)]
    fn instant_query_event(
        &self,
        event_kind: fn(&SelfProfiler) -> StringId,
        query_invocation_id: QueryInvocationId,
    ) {
        let event_id = StringId::new_virtual(query_invocation_id.0);
        let thread_id = get_thread_id();
        let profiler = self.profiler.as_ref().unwrap();
        profiler.profiler.record_instant_event(
            event_kind(profiler),
            EventId::from_virtual(event_id),
            thread_id,
        );
    }

    pub fn with_profiler(&self, f: impl FnOnce(&SelfProfiler)) {
        if let Some(profiler) = &self.profiler {
            f(profiler)
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
    /// Returns `None` if self-profiling is not enabled.
    pub fn get_or_alloc_cached_string(&self, s: &str) -> Option<StringId> {
        self.profiler.as_ref().map(|p| p.get_or_alloc_cached_string(s))
    }

    /// Store query cache hits to the self-profile log.
    /// Should be called once at the end of the compilation session.
    ///
    /// The cache hits are stored per **query invocation**, not **per query kind/type**.
    /// `analyzeme` can later deduplicate individual query labels from the QueryInvocationId event
    /// IDs.
    pub fn store_query_cache_hits(&self) {
        if self.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
            let profiler = self.profiler.as_ref().unwrap();
            let query_hits = profiler.query_hits.read();
            let builder = EventIdBuilder::new(&profiler.profiler);
            let thread_id = get_thread_id();
            for (query_invocation, hit_count) in query_hits.iter().enumerate() {
                let hit_count = hit_count.load(Ordering::Relaxed);
                // No need to record empty cache hit counts
                if hit_count > 0 {
                    let event_id =
                        builder.from_label(StringId::new_virtual(query_invocation as u64));
                    profiler.profiler.record_integer_event(
                        profiler.query_cache_hit_count_event_kind,
                        event_id,
                        thread_id,
                        hit_count,
                    );
                }
            }
        }
    }

    #[inline]
    pub fn enabled(&self) -> bool {
        self.profiler.is_some()
    }

    #[inline]
    pub fn llvm_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::LLVM)
    }
    #[inline]
    pub fn get_self_profiler(&self) -> Option<Arc<SelfProfiler>> {
        self.profiler.clone()
    }

    /// Is expensive recording of query keys and/or function arguments enabled?
    pub fn is_args_recording_enabled(&self) -> bool {
        self.enabled() && self.event_filter_mask.intersects(EventFilter::ARGS)
    }
}

/// A helper for recording costly arguments to self-profiling events. Used with
/// `SelfProfilerRef::generic_activity_with_arg_recorder`.
pub struct EventArgRecorder<'p> {
    /// The `SelfProfiler` used to intern the event arguments that users will ask to record.
    profiler: &'p SelfProfiler,

    /// The interned event arguments to be recorded in the generic activity event.
    ///
    /// The most common case, when actually recording event arguments, is to have one argument;
    /// recording two occurs in a couple of places.
    args: SmallVec<[StringId; 2]>,
}

impl EventArgRecorder<'_> {
    /// Records a single argument within the current generic activity being profiled.
    ///
    /// Note: when self-profiling with costly event arguments, at least one argument
    /// needs to be recorded. A panic will be triggered if that doesn't happen.
    pub fn record_arg<A>(&mut self, event_arg: A)
    where
        A: Borrow<str> + Into<String>,
    {
        let event_arg = self.profiler.get_or_alloc_cached_string(event_arg);
        self.args.push(event_arg);
    }
}

pub struct SelfProfiler {
    profiler: Profiler,
    event_filter_mask: EventFilter,

    string_cache: RwLock<FxHashMap<String, StringId>>,

    /// Recording individual query cache hits as "instant" measureme events
    /// is incredibly expensive. Instead of doing that, we simply aggregate
    /// cache hit *counts* per query invocation, and then store the final count
    /// of cache hits per invocation at the end of the compilation session.
    ///
    /// With this approach, we don't know the individual thread IDs and timestamps
    /// of cache hits, but it has very little overhead on top of `-Zself-profile`.
    /// Recording the cache hits as individual events made compilation 3-5x slower.
    ///
    /// Query invocation IDs should be monotonic integers, so we can store them in a vec,
    /// rather than using a hashmap.
    query_hits: RwLock<Vec<AtomicU64>>,

    query_event_kind: StringId,
    generic_activity_event_kind: StringId,
    incremental_load_result_event_kind: StringId,
    incremental_result_hashing_event_kind: StringId,
    query_blocked_event_kind: StringId,
    query_cache_hit_event_kind: StringId,
    artifact_size_event_kind: StringId,
    /// Total cache hits per query invocation
    query_cache_hit_count_event_kind: StringId,
}

impl SelfProfiler {
    pub fn new(
        output_directory: &Path,
        crate_name: Option<&str>,
        event_filters: Option<&[String]>,
        counter_name: &str,
    ) -> Result<SelfProfiler, Box<dyn Error + Send + Sync>> {
        fs::create_dir_all(output_directory)?;

        let crate_name = crate_name.unwrap_or("unknown-crate");
        // HACK(eddyb) we need to pad the PID, strange as it may seem, as its
        // length can behave as a source of entropy for heap addresses, when
        // ASLR is disabled and the heap is otherwise deterministic.
        let pid: u32 = process::id();
        let filename = format!("{crate_name}-{pid:07}.rustc_profile");
        let path = output_directory.join(filename);
        let profiler =
            Profiler::with_counter(&path, measureme::counters::Counter::by_name(counter_name)?)?;

        let query_event_kind = profiler.alloc_string("Query");
        let generic_activity_event_kind = profiler.alloc_string("GenericActivity");
        let incremental_load_result_event_kind = profiler.alloc_string("IncrementalLoadResult");
        let incremental_result_hashing_event_kind =
            profiler.alloc_string("IncrementalResultHashing");
        let query_blocked_event_kind = profiler.alloc_string("QueryBlocked");
        let query_cache_hit_event_kind = profiler.alloc_string("QueryCacheHit");
        let artifact_size_event_kind = profiler.alloc_string("ArtifactSize");
        let query_cache_hit_count_event_kind = profiler.alloc_string("QueryCacheHitCount");

        let mut event_filter_mask = EventFilter::empty();

        if let Some(event_filters) = event_filters {
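            // Each recognized filter name contributes its bit to the mask, so
            // several event kinds can be combined on the command line
            // (e.g. `-Zself-profile-events=default,args`).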
            let mut unknown_events = vec![];
            for item in event_filters {
                if let Some(&(_, mask)) =
                    EVENT_FILTERS_BY_NAME.iter().find(|&(name, _)| name == item)
                {
                    event_filter_mask |= mask;
                } else {
                    unknown_events.push(item.clone());
                }
            }

            // Warn about any unknown event names
            if !unknown_events.is_empty() {
                unknown_events.sort();
                unknown_events.dedup();

                warn!(
                    "Unknown self-profiler events specified: {}. Available options are: {}.",
                    unknown_events.join(", "),
                    EVENT_FILTERS_BY_NAME
                        .iter()
                        .map(|&(name, _)| name.to_string())
                        .collect::<Vec<_>>()
                        .join(", ")
                );
            }
        } else {
            event_filter_mask = EventFilter::DEFAULT;
        }

        Ok(SelfProfiler {
            profiler,
            event_filter_mask,
            string_cache: RwLock::new(FxHashMap::default()),
            query_event_kind,
            generic_activity_event_kind,
            incremental_load_result_event_kind,
            incremental_result_hashing_event_kind,
            query_blocked_event_kind,
            query_cache_hit_event_kind,
            artifact_size_event_kind,
            query_cache_hit_count_event_kind,
            query_hits: Default::default(),
        })
    }

    /// Allocates a new string in the profiling data. Does not do any caching
    /// or deduplication.
    pub fn alloc_string<STR: SerializableString + ?Sized>(&self, s: &STR) -> StringId {
        self.profiler.alloc_string(s)
    }

    /// Records a cache hit for the given query invocation.
    pub fn increment_query_cache_hit_counters(&self, id: QueryInvocationId) {
        // Fast path: assume that the query was already encountered before, and just record
        // a cache hit.
        let mut guard = self.query_hits.upgradable_read();
        let query_hits = &guard;
        let index = id.0 as usize;
        if index < query_hits.len() {
            // We only want to increment the count, no other synchronization is required
            query_hits[index].fetch_add(1, Ordering::Relaxed);
        } else {
            // If not, we need to extend the query hit map to the highest observed ID
            guard.with_upgraded(|vec| {
                vec.resize_with(index + 1, || AtomicU64::new(0));
                vec[index] = AtomicU64::from(1);
            });
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
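    ///
    /// A sketch of the caching behavior (the string content is illustrative):
    ///
    /// ```ignore (requires a `SelfProfiler` instance)
    /// let id = profiler.get_or_alloc_cached_string("typeck");
    /// // A later call with equal contents returns the cached `StringId`
    /// // instead of allocating the string a second time.
    /// let same_id = profiler.get_or_alloc_cached_string(String::from("typeck"));
    /// ```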
    pub fn get_or_alloc_cached_string<A>(&self, s: A) -> StringId
    where
        A: Borrow<str> + Into<String>,
    {
        // Only acquire a read-lock first since we assume that the string is
        // already present in the common case.
        {
            let string_cache = self.string_cache.read();

            if let Some(&id) = string_cache.get(s.borrow()) {
                return id;
            }
        }

        let mut string_cache = self.string_cache.write();
        // Check if the string has already been added in the small time window
        // between dropping the read lock and acquiring the write lock.
        match string_cache.entry(s.into()) {
            Entry::Occupied(e) => *e.get(),
            Entry::Vacant(e) => {
                let string_id = self.profiler.alloc_string(&e.key()[..]);
                *e.insert(string_id)
            }
        }
    }

    pub fn map_query_invocation_id_to_string(&self, from: QueryInvocationId, to: StringId) {
        let from = StringId::new_virtual(from.0);
        self.profiler.map_virtual_to_concrete_string(from, to);
    }
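
    /// A sketch of mapping every invocation of one query to a single concrete
    /// name string at the end of the session (`invocation_ids` and the query
    /// name are illustrative):
    ///
    /// ```ignore (requires a `SelfProfiler` instance)
    /// let name = profiler.alloc_string("typeck");
    /// profiler.bulk_map_query_invocation_id_to_single_string(
    ///     invocation_ids.iter().copied(),
    ///     name,
    /// );
    /// ```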
    pub fn bulk_map_query_invocation_id_to_single_string<I>(&self, from: I, to: StringId)
    where
        I: Iterator<Item = QueryInvocationId> + ExactSizeIterator,
    {
        let from = from.map(|qid| StringId::new_virtual(qid.0));
        self.profiler.bulk_map_virtual_to_single_concrete_string(from, to);
    }

    pub fn query_key_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::QUERY_KEYS)
    }

    pub fn event_id_builder(&self) -> EventIdBuilder<'_> {
        EventIdBuilder::new(&self.profiler)
    }
}
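
/// A guard for an interval event: the event starts when the guard is created
/// via `TimingGuard::start` and ends when the guard is dropped.
///
/// A minimal usage sketch (the event kind and id values are illustrative and
/// would normally come from a `SelfProfiler`):
///
/// ```ignore (requires a `SelfProfiler` instance)
/// let guard = TimingGuard::start(&profiler, event_kind, event_id);
/// // ... the work being timed ...
/// drop(guard); // the interval event ends here
/// ```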
#[must_use]
pub struct TimingGuard<'a>(Option<measureme::TimingGuard<'a>>);

impl<'a> TimingGuard<'a> {
    #[inline]
    pub fn start(
        profiler: &'a SelfProfiler,
        event_kind: StringId,
        event_id: EventId,
    ) -> TimingGuard<'a> {
        let thread_id = get_thread_id();
        let raw_profiler = &profiler.profiler;
        let timing_guard =
            raw_profiler.start_recording_interval_event(event_kind, event_id, thread_id);
        TimingGuard(Some(timing_guard))
    }

    #[inline]
    pub fn finish_with_query_invocation_id(self, query_invocation_id: QueryInvocationId) {
        if let Some(guard) = self.0 {
            outline(|| {
                let event_id = StringId::new_virtual(query_invocation_id.0);
                let event_id = EventId::from_virtual(event_id);
                guard.finish_with_override_event_id(event_id);
            });
        }
    }

    #[inline]
    pub fn none() -> TimingGuard<'a> {
        TimingGuard(None)
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}

struct VerboseInfo {
    start_time: Instant,
    start_rss: Option<usize>,
    message: String,
    format: TimePassesFormat,
}

#[must_use]
pub struct VerboseTimingGuard<'a> {
    info: Option<VerboseInfo>,
    _guard: TimingGuard<'a>,
}

impl<'a> VerboseTimingGuard<'a> {
    pub fn start(
        message_and_format: Option<(String, TimePassesFormat)>,
        _guard: TimingGuard<'a>,
    ) -> Self {
        VerboseTimingGuard {
            _guard,
            info: message_and_format.map(|(message, format)| VerboseInfo {
                start_time: Instant::now(),
                start_rss: get_resident_set_size(),
                message,
                format,
            }),
        }
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}

impl Drop for VerboseTimingGuard<'_> {
    fn drop(&mut self) {
        if let Some(info) = &self.info {
            let end_rss = get_resident_set_size();
            let dur = info.start_time.elapsed();
            print_time_passes_entry(&info.message, dur, info.start_rss, end_rss, info.format);
        }
    }
}
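
/// A single time-passes entry in JSON form; `Display` renders it as e.g.
/// `{"pass":"typeck","time":0.001,"rss_start":100,"rss_end":200}`, with
/// `null` standing in for missing RSS values.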
struct JsonTimePassesEntry<'a> {
    pass: &'a str,
    time: f64,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
}

impl Display for JsonTimePassesEntry<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let Self { pass: what, time, start_rss, end_rss } = self;
        write!(f, r#"{{"pass":"{what}","time":{time},"rss_start":"#).unwrap();
        match start_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, r#","rss_end":"#)?;
        match end_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, "}}")?;
        Ok(())
    }
}

pub fn print_time_passes_entry(
    what: &str,
    dur: Duration,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
    format: TimePassesFormat,
) {
    match format {
        TimePassesFormat::Json => {
            let entry =
                JsonTimePassesEntry { pass: what, time: dur.as_secs_f64(), start_rss, end_rss };

            eprintln!(r#"time: {entry}"#);
            return;
        }
        TimePassesFormat::Text => (),
    }

    // Print the pass if its duration is greater than 5 ms, or it changed the
    // measured RSS.
    let is_notable = || {
        if dur.as_millis() > 5 {
            return true;
        }

        if let (Some(start_rss), Some(end_rss)) = (start_rss, end_rss) {
            let change_rss = end_rss.abs_diff(start_rss);
            if change_rss > 0 {
                return true;
            }
        }

        false
    };
    if !is_notable() {
        return;
    }

    let rss_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as usize;
    let rss_change_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as i128;

    let mem_string = match (start_rss, end_rss) {
        (Some(start_rss), Some(end_rss)) => {
            let change_rss = end_rss as i128 - start_rss as i128;

            format!(
                "; rss: {:>4}MB -> {:>4}MB ({:>+5}MB)",
                rss_to_mb(start_rss),
                rss_to_mb(end_rss),
                rss_change_to_mb(change_rss),
            )
        }
        (Some(start_rss), None) => format!("; rss start: {:>4}MB", rss_to_mb(start_rss)),
        (None, Some(end_rss)) => format!("; rss end: {:>4}MB", rss_to_mb(end_rss)),
        (None, None) => String::new(),
    };
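
    // The text format renders e.g.
    // `time:   0.123; rss:  100MB ->  120MB (  +20MB)\ttypeck`.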
    eprintln!("time: {:>7}{}\t{}", duration_to_secs_str(dur), mem_string, what);
}

// Hack up our own formatting for the duration to make it easier for scripts
// to parse (always use the same number of decimal places and the same unit).
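// For example, a duration of 1.5 seconds is rendered as "1.500".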
pub fn duration_to_secs_str(dur: std::time::Duration) -> String {
    format!("{:.3}", dur.as_secs_f64())
}

fn get_thread_id() -> u32 {
    std::thread::current().id().as_u64().get() as u32
}

// Memory reporting
cfg_select! {
    windows => {
        pub fn get_resident_set_size() -> Option<usize> {
            use windows::{
                Win32::System::ProcessStatus::{K32GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS},
                Win32::System::Threading::GetCurrentProcess,
            };

            let mut pmc = PROCESS_MEMORY_COUNTERS::default();
            let pmc_size = size_of_val(&pmc);
            unsafe {
                K32GetProcessMemoryInfo(
                    GetCurrentProcess(),
                    &mut pmc,
                    pmc_size as u32,
                )
            }
            .ok()
            .ok()?;

            Some(pmc.WorkingSetSize)
        }
    }
    target_os = "macos" => {
        pub fn get_resident_set_size() -> Option<usize> {
            use libc::{c_int, c_void, getpid, proc_pidinfo, proc_taskinfo, PROC_PIDTASKINFO};
            use std::mem;
            const PROC_TASKINFO_SIZE: c_int = size_of::<proc_taskinfo>() as c_int;

            unsafe {
                let mut info: proc_taskinfo = mem::zeroed();
                let info_ptr = &mut info as *mut proc_taskinfo as *mut c_void;
                let pid = getpid() as c_int;
                let ret = proc_pidinfo(pid, PROC_PIDTASKINFO, 0, info_ptr, PROC_TASKINFO_SIZE);
                if ret == PROC_TASKINFO_SIZE {
                    Some(info.pti_resident_size as usize)
                } else {
                    None
                }
            }
        }
    }
    unix => {
        pub fn get_resident_set_size() -> Option<usize> {
            use libc::{sysconf, _SC_PAGESIZE};
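            // Field 1 of `/proc/self/statm` is the resident set size, measured in pages.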
            let field = 1;
            let contents = fs::read("/proc/self/statm").ok()?;
            let contents = String::from_utf8(contents).ok()?;
            let s = contents.split_whitespace().nth(field)?;
            let npages = s.parse::<usize>().ok()?;
            // SAFETY: `sysconf(_SC_PAGESIZE)` has no side effects and is safe to call.
            Some(npages * unsafe { sysconf(_SC_PAGESIZE) } as usize)
        }
    }
    _ => {
        pub fn get_resident_set_size() -> Option<usize> {
            None
        }
    }
}

#[cfg(test)]
mod tests;