//! # Rust Compiler Self-Profiling
//!
//! This module implements the basic framework for the compiler's self-
//! profiling support. It provides the `SelfProfiler` type which enables
//! recording "events". An event is something that starts and ends at a given
//! point in time and has an ID and a kind attached to it. This allows for
//! tracing the compiler's activity.
//!
//! Internally this module uses the custom tailored [measureme][mm] crate for
//! efficiently recording events to disk in a compact format that can be
//! post-processed and analyzed by the suite of tools in the `measureme`
//! project. The highest priority for the tracing framework is incurring as
//! little overhead as possible.
//!
//!
//! ## Event Overview
//!
//! Events have a few properties:
//!
//! - The `event_kind` designates the broad category of an event (e.g. does it
//!   correspond to the execution of a query provider or to loading something
//!   from the incr. comp. on-disk cache, etc).
//! - The `event_id` designates the query invocation or function call it
//!   corresponds to, possibly including the query key or function arguments.
//! - Each event stores the ID of the thread it was recorded on.
//! - The timestamp stores beginning and end of the event, or the single point
//!   in time it occurred at for "instant" events.
//!
//!
//! ## Event Filtering
//!
//! Event generation can be filtered by event kind. Recording all possible
//! events generates a lot of data, much of which is not needed for most kinds
//! of analysis. So, in order to keep overhead as low as possible for a given
//! use case, the `SelfProfiler` will only record the kinds of events that
//! pass the filter specified as a command line argument to the compiler.
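//! For example, a typical invocation selecting a filter on the command line
//! might look like this (the event names correspond to
//! `EVENT_FILTERS_BY_NAME` below; the exact set accepted may vary by
//! compiler version):
//!
//! ```text
//! rustc -Z self-profile -Z self-profile-events=default,args foo.rs
//! ```
//!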
//!
//! ## `event_id` Assignment
//!
//! As far as `measureme` is concerned, `event_id`s are just strings. However,
//! it would incur too much overhead to generate and persist each `event_id`
//! string at the point where the event is recorded. In order to make this more
//! efficient `measureme` has two features:
//!
//! - Strings can share their content, so that re-occurring parts don't have to
//!   be copied over and over again. One allocates a string in `measureme` and
//!   gets back a `StringId`. This `StringId` is then used to refer to that
//!   string. `measureme` strings are actually DAGs of string components so
//!   that arbitrary sharing of substrings can be done efficiently. This is
//!   useful because `event_id`s contain lots of redundant text like query
//!   names or def-path components.
//!
//! - `StringId`s can be "virtual" which means that the client picks a numeric
//!   ID according to some application-specific scheme and can later make that
//!   ID be mapped to an actual string. This is used to cheaply generate
//!   `event_id`s while the events actually occur, causing little timing
//!   distortion, and then later map those `StringId`s, in bulk, to actual
//!   `event_id` strings. This way the largest part of the tracing overhead is
//!   localized to one contiguous chunk of time.
//!
//! How are these `event_id`s generated in the compiler? For things that occur
//! infrequently (e.g. "generic activities"), we just allocate the string the
//! first time it is used and then keep the `StringId` in a hash table. This
//! is implemented in `SelfProfiler::get_or_alloc_cached_string()`.
//!
//! For queries it gets more interesting: First we need a unique numeric ID for
//! each query invocation (the `QueryInvocationId`). This ID is used as the
//! virtual `StringId` we use as `event_id` for a given event. This ID has to
//! be available both when the query is executed and later, together with the
//! query key, when we allocate the actual `event_id` strings in bulk.
//!
//! We could make the compiler generate and keep track of such an ID for each
//! query invocation but luckily we already have something that fits all the
//! requirements: the query's `DepNodeIndex`. So we use the numeric value of
//! the `DepNodeIndex` as `event_id` when recording the event and then, just
//! before the query context is dropped, we walk the entire query cache (which
//! stores the `DepNodeIndex` along with the query key for each invocation)
//! and allocate the corresponding strings together with a mapping for
//! `DepNodeIndex as StringId`.
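//!
//! A minimal sketch of that virtual-`StringId` scheme, assuming an
//! illustrative `as_u64()` accessor on `dep_node_index` (the surrounding
//! plumbing is simplified):
//!
//! ```ignore (illustrative)
//! // While the query runs: cheap, no string content is materialized yet.
//! let event_id = EventId::from_virtual(StringId::new_virtual(dep_node_index.as_u64()));
//!
//! // Later, just before the query context is dropped: map all virtual IDs
//! // to concrete strings in one bulk pass over the query caches.
//! let concrete = profiler.alloc_string(format!("{query_name}({query_key:?})").as_str());
//! profiler.map_virtual_to_concrete_string(
//!     StringId::new_virtual(dep_node_index.as_u64()),
//!     concrete,
//! );
//! ```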
//!
//! [mm]: https://github.com/rust-lang/measureme/

use std::borrow::Borrow;
use std::collections::hash_map::Entry;
use std::error::Error;
use std::fmt::Display;
use std::intrinsics::unlikely;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{Duration, Instant};
use std::{fs, process};

pub use measureme::EventId;
use measureme::{EventIdBuilder, Profiler, SerializableString, StringId};
use parking_lot::RwLock;
use smallvec::SmallVec;
use tracing::warn;

use crate::fx::FxHashMap;
use crate::outline;
use crate::sync::AtomicU64;

bitflags::bitflags! {
    #[derive(Clone, Copy)]
    struct EventFilter: u16 {
        const GENERIC_ACTIVITIES = 1 << 0;
        const QUERY_PROVIDERS = 1 << 1;
        /// Store detailed instant events, including timestamp and thread ID,
        /// for each query cache hit. Note that this is quite expensive.
        const QUERY_CACHE_HITS = 1 << 2;
        const QUERY_BLOCKED = 1 << 3;
        const INCR_CACHE_LOADS = 1 << 4;

        const QUERY_KEYS = 1 << 5;
        const FUNCTION_ARGS = 1 << 6;
        const LLVM = 1 << 7;
        const INCR_RESULT_HASHING = 1 << 8;
        const ARTIFACT_SIZES = 1 << 9;
        /// Store aggregated counts of cache hits per query invocation.
        const QUERY_CACHE_HIT_COUNTS = 1 << 10;

        const DEFAULT = Self::GENERIC_ACTIVITIES.bits() |
            Self::QUERY_PROVIDERS.bits() |
            Self::QUERY_BLOCKED.bits() |
            Self::INCR_CACHE_LOADS.bits() |
            Self::INCR_RESULT_HASHING.bits() |
            Self::ARTIFACT_SIZES.bits() |
            Self::QUERY_CACHE_HIT_COUNTS.bits();

        const ARGS = Self::QUERY_KEYS.bits() | Self::FUNCTION_ARGS.bits();
        const QUERY_CACHE_HIT_COMBINED =
            Self::QUERY_CACHE_HITS.bits() | Self::QUERY_CACHE_HIT_COUNTS.bits();
    }
}
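
// An illustrative spot-check (not part of the compiler proper) of how the
// masks above combine: `DEFAULT` includes the aggregated cache-hit counts but
// not the expensive per-hit instant events.
#[cfg(test)]
mod event_filter_demo {
    use super::EventFilter;

    #[test]
    fn default_mask_matches_its_definition() {
        assert!(EventFilter::DEFAULT.contains(EventFilter::QUERY_PROVIDERS));
        assert!(EventFilter::DEFAULT.contains(EventFilter::QUERY_CACHE_HIT_COUNTS));
        assert!(!EventFilter::DEFAULT.contains(EventFilter::QUERY_CACHE_HITS));
    }
}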

// keep this in sync with the `-Z self-profile-events` help message in rustc_session/options.rs
const EVENT_FILTERS_BY_NAME: &[(&str, EventFilter)] = &[
    ("none", EventFilter::empty()),
    ("all", EventFilter::all()),
    ("default", EventFilter::DEFAULT),
    ("generic-activity", EventFilter::GENERIC_ACTIVITIES),
    ("query-provider", EventFilter::QUERY_PROVIDERS),
    ("query-cache-hit", EventFilter::QUERY_CACHE_HITS),
    ("query-cache-hit-count", EventFilter::QUERY_CACHE_HIT_COUNTS),
    ("query-blocked", EventFilter::QUERY_BLOCKED),
    ("incr-cache-load", EventFilter::INCR_CACHE_LOADS),
    ("query-keys", EventFilter::QUERY_KEYS),
    ("function-args", EventFilter::FUNCTION_ARGS),
    ("args", EventFilter::ARGS),
    ("llvm", EventFilter::LLVM),
    ("incr-result-hashing", EventFilter::INCR_RESULT_HASHING),
    ("artifact-sizes", EventFilter::ARTIFACT_SIZES),
];

/// Something that uniquely identifies a query invocation.
pub struct QueryInvocationId(pub u32);

/// Which format to use for `-Z time-passes`
#[derive(Clone, Copy, PartialEq, Hash, Debug)]
pub enum TimePassesFormat {
    /// Emit human readable text
    Text,
    /// Emit structured JSON
    Json,
}

/// A reference to the SelfProfiler. It can be cloned and sent across thread
/// boundaries at will.
#[derive(Clone)]
pub struct SelfProfilerRef {
    // This field is `None` if self-profiling is disabled for the current
    // compilation session.
    profiler: Option<Arc<SelfProfiler>>,

    // We store the filter mask directly in the reference because that doesn't
    // cost anything and allows for filtering without checking if the profiler
    // is actually enabled.
    event_filter_mask: EventFilter,

    // Print verbose generic activities to stderr.
    print_verbose_generic_activities: Option<TimePassesFormat>,
}

impl SelfProfilerRef {
    pub fn new(
        profiler: Option<Arc<SelfProfiler>>,
        print_verbose_generic_activities: Option<TimePassesFormat>,
    ) -> SelfProfilerRef {
        // If there is no SelfProfiler then the filter mask is set to NONE,
        // ensuring that nothing ever tries to actually access it.
        let event_filter_mask =
            profiler.as_ref().map_or(EventFilter::empty(), |p| p.event_filter_mask);

        SelfProfilerRef { profiler, event_filter_mask, print_verbose_generic_activities }
    }

    /// This shim makes sure that calls only get executed if the filter mask
    /// lets them pass. It also contains some trickery to make sure that
    /// code is optimized for non-profiling compilation sessions, i.e. anything
    /// past the filter check is never inlined so it doesn't clutter the fast
    /// path.
    #[inline(always)]
    fn exec<F>(&self, event_filter: EventFilter, f: F) -> TimingGuard<'_>
    where
        F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
    {
        #[inline(never)]
        #[cold]
        fn cold_call<F>(profiler_ref: &SelfProfilerRef, f: F) -> TimingGuard<'_>
        where
            F: for<'a> FnOnce(&'a SelfProfiler) -> TimingGuard<'a>,
        {
            let profiler = profiler_ref.profiler.as_ref().unwrap();
            f(profiler)
        }

        if self.event_filter_mask.contains(event_filter) {
            cold_call(self, f)
        } else {
            TimingGuard::none()
        }
    }

    /// Start profiling a verbose generic activity. Profiling continues until the
    /// VerboseTimingGuard returned from this call is dropped. In addition to recording
    /// a measureme event, "verbose" generic activities also print a timing entry to
    /// stderr if the compiler is invoked with -Ztime-passes.
    pub fn verbose_generic_activity(&self, event_label: &'static str) -> VerboseTimingGuard<'_> {
        let message_and_format =
            self.print_verbose_generic_activities.map(|format| (event_label.to_owned(), format));

        VerboseTimingGuard::start(message_and_format, self.generic_activity(event_label))
    }

    /// Like `verbose_generic_activity`, but with an extra arg.
    pub fn verbose_generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> VerboseTimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        let message_and_format = self
            .print_verbose_generic_activities
            .map(|format| (format!("{}({})", event_label, event_arg.borrow()), format));

        VerboseTimingGuard::start(
            message_and_format,
            self.generic_activity_with_arg(event_label, event_arg),
        )
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
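    ///
    /// A minimal usage sketch (`prof` stands in for a `SelfProfilerRef`
    /// obtained from the compiler session; the label is illustrative):
    ///
    /// ```ignore (illustrative)
    /// {
    ///     let _timer = prof.generic_activity("codegen_crate");
    ///     // ... the work being measured ...
    /// } // guard dropped here, recording the end of the event
    /// ```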
    #[inline(always)]
    pub fn generic_activity(&self, event_label: &'static str) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = EventId::from_label(event_label);
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling with some event filter for a given event. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_event_id(&self, event_id: EventId) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn generic_activity_with_arg<A>(
        &self,
        event_label: &'static str,
        event_arg: A,
    ) -> TimingGuard<'_>
    where
        A: Borrow<str> + Into<String>,
    {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_arg = profiler.get_or_alloc_cached_string(event_arg);
                builder.from_label_and_arg(event_label, event_arg)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a generic activity, allowing costly arguments to be recorded. Profiling
    /// continues until the `TimingGuard` returned from this call is dropped.
    ///
    /// If the arguments to a generic activity are cheap to create, use `generic_activity_with_arg`
    /// or `generic_activity_with_args` for their simpler API. However, if they are costly or
    /// require allocation in sufficiently hot contexts, then this allows for a closure to be called
    /// only when arguments were asked to be recorded via `-Z self-profile-events=args`.
    ///
    /// In this case, the closure will be passed a `&mut EventArgRecorder`, to help with recording
    /// one or many arguments within the generic activity being profiled, by calling its
    /// `record_arg` method for example.
    ///
    /// This `EventArgRecorder` may implement more specific traits from other rustc crates, e.g. for
    /// richer handling of rustc-specific argument types, while keeping this single entry-point API
    /// for recording arguments.
    ///
    /// Note: recording at least one argument is *required* for the self-profiler to create the
    /// `TimingGuard`. A panic will be triggered if that doesn't happen. This function exists
    /// explicitly to record arguments, so it fails loudly when there are none to record.
    ///
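    /// A minimal usage sketch (the label and the argument-producing call are
    /// illustrative, not real compiler functions):
    ///
    /// ```ignore (illustrative)
    /// let _timer = prof.generic_activity_with_arg_recorder("encode_item", |recorder| {
    ///     // Only called when argument recording is enabled, so the costly
    ///     // string is never built otherwise.
    ///     recorder.record_arg(expensive_def_path_string());
    /// });
    /// ```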
    #[inline(always)]
    pub fn generic_activity_with_arg_recorder<F>(
        &self,
        event_label: &'static str,
        mut f: F,
    ) -> TimingGuard<'_>
    where
        F: FnMut(&mut EventArgRecorder<'_>),
    {
        // Ensure this event will only be recorded when self-profiling is turned on.
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);

            // Ensure the closure to create event arguments will only be called when argument
            // recording is turned on.
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                // Set up the builder and call the user-provided closure to record potentially
                // costly event arguments.
                let mut recorder = EventArgRecorder { profiler, args: SmallVec::new() };
                f(&mut recorder);

                // It is expected that the closure will record at least one argument. If that
                // doesn't happen, it's a bug: we've been explicitly called in order to record
                // arguments, so we fail loudly when there are none to record.
                if recorder.args.is_empty() {
                    panic!(
                        "The closure passed to `generic_activity_with_arg_recorder` needs to \
                         record at least one argument"
                    );
                }

                builder.from_label_and_args(event_label, &recorder.args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Record the size of an artifact that the compiler produces
    ///
    /// `artifact_kind` is the class of artifact (e.g., query_cache, object_file, etc.)
    /// `artifact_name` is an identifier to the specific artifact being stored (usually a filename)
    #[inline(always)]
    pub fn artifact_size<A>(&self, artifact_kind: &str, artifact_name: A, size: u64)
    where
        A: Borrow<str> + Into<String>,
    {
        drop(self.exec(EventFilter::ARTIFACT_SIZES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(artifact_kind);
            let event_arg = profiler.get_or_alloc_cached_string(artifact_name);
            let event_id = builder.from_label_and_arg(event_label, event_arg);
            let thread_id = get_thread_id();

            profiler.profiler.record_integer_event(
                profiler.artifact_size_event_kind,
                event_id,
                thread_id,
                size,
            );

            TimingGuard::none()
        }))
    }

    #[inline(always)]
    pub fn generic_activity_with_args(
        &self,
        event_label: &'static str,
        event_args: &[String],
    ) -> TimingGuard<'_> {
        self.exec(EventFilter::GENERIC_ACTIVITIES, |profiler| {
            let builder = EventIdBuilder::new(&profiler.profiler);
            let event_label = profiler.get_or_alloc_cached_string(event_label);
            let event_id = if profiler.event_filter_mask.contains(EventFilter::FUNCTION_ARGS) {
                let event_args: Vec<_> = event_args
                    .iter()
                    .map(|s| profiler.get_or_alloc_cached_string(&s[..]))
                    .collect();
                builder.from_label_and_args(event_label, &event_args)
            } else {
                builder.from_label(event_label)
            };
            TimingGuard::start(profiler, profiler.generic_activity_event_kind, event_id)
        })
    }

    /// Start profiling a query provider. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn query_provider(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_PROVIDERS, |profiler| {
            TimingGuard::start(profiler, profiler.query_event_kind, EventId::INVALID)
        })
    }

    /// Record a query in-memory cache hit.
    #[inline(always)]
    pub fn query_cache_hit(&self, query_invocation_id: QueryInvocationId) {
        #[inline(never)]
        #[cold]
        fn cold_call(profiler_ref: &SelfProfilerRef, query_invocation_id: QueryInvocationId) {
            if profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
                profiler_ref
                    .profiler
                    .as_ref()
                    .unwrap()
                    .increment_query_cache_hit_counters(QueryInvocationId(query_invocation_id.0));
            }
            if unlikely(profiler_ref.event_filter_mask.contains(EventFilter::QUERY_CACHE_HITS)) {
                profiler_ref.instant_query_event(
                    |profiler| profiler.query_cache_hit_event_kind,
                    query_invocation_id,
                );
            }
        }

        // We check both kinds of query cache hit events at once, to reduce overhead in the
        // common case (with self-profile disabled).
        if unlikely(self.event_filter_mask.intersects(EventFilter::QUERY_CACHE_HIT_COMBINED)) {
            cold_call(self, query_invocation_id);
        }
    }

    /// Start profiling a query being blocked on a concurrent execution.
    /// Profiling continues until the TimingGuard returned from this call is
    /// dropped.
    #[inline(always)]
    pub fn query_blocked(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::QUERY_BLOCKED, |profiler| {
            TimingGuard::start(profiler, profiler.query_blocked_event_kind, EventId::INVALID)
        })
    }

    /// Start profiling how long it takes to load a query result from the
    /// incremental compilation on-disk cache. Profiling continues until the
    /// TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_cache_loading(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_CACHE_LOADS, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_load_result_event_kind,
                EventId::INVALID,
            )
        })
    }

    /// Start profiling how long it takes to hash query results for incremental compilation.
    /// Profiling continues until the TimingGuard returned from this call is dropped.
    #[inline(always)]
    pub fn incr_result_hashing(&self) -> TimingGuard<'_> {
        self.exec(EventFilter::INCR_RESULT_HASHING, |profiler| {
            TimingGuard::start(
                profiler,
                profiler.incremental_result_hashing_event_kind,
                EventId::INVALID,
            )
        })
    }

    #[inline(always)]
    fn instant_query_event(
        &self,
        event_kind: fn(&SelfProfiler) -> StringId,
        query_invocation_id: QueryInvocationId,
    ) {
        let event_id = StringId::new_virtual(query_invocation_id.0);
        let thread_id = get_thread_id();
        let profiler = self.profiler.as_ref().unwrap();
        profiler.profiler.record_instant_event(
            event_kind(profiler),
            EventId::from_virtual(event_id),
            thread_id,
        );
    }

    pub fn with_profiler(&self, f: impl FnOnce(&SelfProfiler)) {
        if let Some(profiler) = &self.profiler {
            f(profiler)
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
    /// Returns `None` if self-profiling is not enabled.
    pub fn get_or_alloc_cached_string(&self, s: &str) -> Option<StringId> {
        self.profiler.as_ref().map(|p| p.get_or_alloc_cached_string(s))
    }

    /// Store query cache hits to the self-profile log.
    /// Should be called once at the end of the compilation session.
    ///
    /// The cache hits are stored per **query invocation**, not **per query kind/type**.
    /// `analyzeme` can later deduplicate individual query labels from the QueryInvocationId event
    /// IDs.
    pub fn store_query_cache_hits(&self) {
        if self.event_filter_mask.contains(EventFilter::QUERY_CACHE_HIT_COUNTS) {
            let profiler = self.profiler.as_ref().unwrap();
            let query_hits = profiler.query_hits.read();
            let builder = EventIdBuilder::new(&profiler.profiler);
            let thread_id = get_thread_id();
            for (query_invocation, hit_count) in query_hits.iter().enumerate() {
                let hit_count = hit_count.load(Ordering::Relaxed);
                // No need to record empty cache hit counts
                if hit_count > 0 {
                    let event_id =
                        builder.from_label(StringId::new_virtual(query_invocation as u64));
                    profiler.profiler.record_integer_event(
                        profiler.query_cache_hit_count_event_kind,
                        event_id,
                        thread_id,
                        hit_count,
                    );
                }
            }
        }
    }

    #[inline]
    pub fn enabled(&self) -> bool {
        self.profiler.is_some()
    }

    #[inline]
    pub fn llvm_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::LLVM)
    }

    #[inline]
    pub fn get_self_profiler(&self) -> Option<Arc<SelfProfiler>> {
        self.profiler.clone()
    }

    /// Is expensive recording of query keys and/or function arguments enabled?
    pub fn is_args_recording_enabled(&self) -> bool {
        self.enabled() && self.event_filter_mask.intersects(EventFilter::ARGS)
    }
}

/// A helper for recording costly arguments to self-profiling events. Used with
/// `SelfProfilerRef::generic_activity_with_arg_recorder`.
pub struct EventArgRecorder<'p> {
    /// The `SelfProfiler` used to intern the event arguments that users will ask to record.
    profiler: &'p SelfProfiler,

    /// The interned event arguments to be recorded in the generic activity event.
    ///
    /// The most common case, when arguments are recorded at all, is a single
    /// argument; two arguments are recorded in only a couple of places.
    args: SmallVec<[StringId; 2]>,
}

impl EventArgRecorder<'_> {
    /// Records a single argument within the current generic activity being profiled.
    ///
    /// Note: when self-profiling with costly event arguments, at least one argument
    /// needs to be recorded. A panic will be triggered if that doesn't happen.
    pub fn record_arg<A>(&mut self, event_arg: A)
    where
        A: Borrow<str> + Into<String>,
    {
        let event_arg = self.profiler.get_or_alloc_cached_string(event_arg);
        self.args.push(event_arg);
    }
}

pub struct SelfProfiler {
    profiler: Profiler,
    event_filter_mask: EventFilter,

    string_cache: RwLock<FxHashMap<String, StringId>>,

    /// Recording individual query cache hits as "instant" measureme events
    /// is incredibly expensive. Instead of doing that, we simply aggregate
    /// cache hit *counts* per query invocation, and then store the final count
    /// of cache hits per invocation at the end of the compilation session.
    ///
    /// With this approach, we don't know the individual thread IDs and timestamps
    /// of cache hits, but it has very little overhead on top of `-Zself-profile`.
    /// Recording the cache hits as individual events made compilation 3-5x slower.
    ///
    /// Query invocation IDs should be monotonic integers, so we can store them in a vec,
    /// rather than using a hashmap.
    query_hits: RwLock<Vec<AtomicU64>>,

    query_event_kind: StringId,
    generic_activity_event_kind: StringId,
    incremental_load_result_event_kind: StringId,
    incremental_result_hashing_event_kind: StringId,
    query_blocked_event_kind: StringId,
    query_cache_hit_event_kind: StringId,
    artifact_size_event_kind: StringId,
    /// Total cache hits per query invocation
    query_cache_hit_count_event_kind: StringId,
}
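
// A minimal sketch of the aggregation scheme described on `query_hits` above,
// assuming monotonic invocation indices. This is illustrative only; the actual
// increment logic lives in `increment_query_cache_hit_counters` and may differ
// in detail.
#[allow(dead_code)]
fn illustrative_increment_hit_count(query_hits: &RwLock<Vec<AtomicU64>>, index: usize) {
    // Fast path: the table is already big enough, so a read lock suffices
    // because the counters themselves are atomic.
    {
        let table = query_hits.read();
        if let Some(counter) = table.get(index) {
            counter.fetch_add(1, Ordering::Relaxed);
            return;
        }
    }
    // Slow path: grow the table under the write lock, then record the hit.
    let mut table = query_hits.write();
    if table.len() <= index {
        table.resize_with(index + 1, || AtomicU64::new(0));
    }
    table[index].fetch_add(1, Ordering::Relaxed);
}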
617618impl SelfProfiler {
619pub fn new(
620 output_directory: &Path,
621 crate_name: Option<&str>,
622 event_filters: Option<&[String]>,
623 counter_name: &str,
624 ) -> Result<SelfProfiler, Box<dyn Error + Send + Sync>> {
        fs::create_dir_all(output_directory)?;

        let crate_name = crate_name.unwrap_or("unknown-crate");
        // HACK(eddyb) we need to pad the PID, strange as it may seem, as its
        // length can behave as a source of entropy for heap addresses, when
        // ASLR is disabled and the heap is otherwise deterministic.
        let pid: u32 = process::id();
        let filename = format!("{crate_name}-{pid:07}.rustc_profile");
        let path = output_directory.join(filename);
        let profiler =
            Profiler::with_counter(&path, measureme::counters::Counter::by_name(counter_name)?)?;

        let query_event_kind = profiler.alloc_string("Query");
        let generic_activity_event_kind = profiler.alloc_string("GenericActivity");
        let incremental_load_result_event_kind = profiler.alloc_string("IncrementalLoadResult");
        let incremental_result_hashing_event_kind =
            profiler.alloc_string("IncrementalResultHashing");
        let query_blocked_event_kind = profiler.alloc_string("QueryBlocked");
        let query_cache_hit_event_kind = profiler.alloc_string("QueryCacheHit");
        let artifact_size_event_kind = profiler.alloc_string("ArtifactSize");
        let query_cache_hit_count_event_kind = profiler.alloc_string("QueryCacheHitCount");

        let mut event_filter_mask = EventFilter::empty();

        if let Some(event_filters) = event_filters {
            let mut unknown_events = vec![];
            for item in event_filters {
                if let Some(&(_, mask)) =
                    EVENT_FILTERS_BY_NAME.iter().find(|&(name, _)| name == item)
                {
                    event_filter_mask |= mask;
                } else {
                    unknown_events.push(item.clone());
                }
            }

            // Warn about any unknown event names
            if !unknown_events.is_empty() {
                unknown_events.sort();
                unknown_events.dedup();

                warn!(
                    "Unknown self-profiler events specified: {}. Available options are: {}.",
                    unknown_events.join(", "),
                    EVENT_FILTERS_BY_NAME
                        .iter()
                        .map(|&(name, _)| name.to_string())
                        .collect::<Vec<_>>()
                        .join(", ")
                );
            }
        } else {
            event_filter_mask = EventFilter::DEFAULT;
        }

        Ok(SelfProfiler {
            profiler,
            event_filter_mask,
            string_cache: RwLock::new(FxHashMap::default()),
            query_event_kind,
            generic_activity_event_kind,
            incremental_load_result_event_kind,
            incremental_result_hashing_event_kind,
            query_blocked_event_kind,
            query_cache_hit_event_kind,
            artifact_size_event_kind,
            query_cache_hit_count_event_kind,
            query_hits: Default::default(),
        })
    }

    /// Allocates a new string in the profiling data. Does not do any caching
    /// or deduplication.
    pub fn alloc_string<STR: SerializableString + ?Sized>(&self, s: &STR) -> StringId {
        self.profiler.alloc_string(s)
    }

    /// Records a cache hit for the given query invocation.
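    ///
    /// Illustrative call pattern (not a doctest; the invocation id is made up):
    /// ```ignore
    /// // Two hits for invocation 7: only the final count (2) is written to the
    /// // profile, as a single integer event, when the session ends.
    /// profiler.increment_query_cache_hit_counters(QueryInvocationId(7));
    /// profiler.increment_query_cache_hit_counters(QueryInvocationId(7));
    /// ```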
    pub fn increment_query_cache_hit_counters(&self, id: QueryInvocationId) {
        // Fast path: assume that the query was already encountered before, and just record
        // a cache hit.
        let mut guard = self.query_hits.upgradable_read();
        let query_hits = &guard;
        let index = id.0 as usize;
        if index < query_hits.len() {
            // We only want to increment the count, no other synchronization is required
            query_hits[index].fetch_add(1, Ordering::Relaxed);
        } else {
            // If not, we need to extend the query hit map to the highest observed ID
            guard.with_upgraded(|vec| {
                vec.resize_with(index + 1, || AtomicU64::new(0));
                vec[index] = AtomicU64::from(1);
            });
        }
    }

    /// Gets a `StringId` for the given string. This method makes sure that
    /// any strings going through it will only be allocated once in the
    /// profiling data.
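    ///
    /// For example (not a doctest, since it needs a fully constructed
    /// `SelfProfiler`):
    /// ```ignore
    /// let a = profiler.get_or_alloc_cached_string("typeck");
    /// let b = profiler.get_or_alloc_cached_string(String::from("typeck"));
    /// assert_eq!(a, b); // interned once; the same `StringId` is returned
    /// ```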
    pub fn get_or_alloc_cached_string<A>(&self, s: A) -> StringId
    where
        A: Borrow<str> + Into<String>,
    {
        // Only acquire a read-lock first since we assume that the string is
        // already present in the common case.
        {
            let string_cache = self.string_cache.read();

            if let Some(&id) = string_cache.get(s.borrow()) {
                return id;
            }
        }

        let mut string_cache = self.string_cache.write();
        // Check if the string has already been added in the small time window
        // between dropping the read lock and acquiring the write lock.
        match string_cache.entry(s.into()) {
            Entry::Occupied(e) => *e.get(),
            Entry::Vacant(e) => {
                let string_id = self.profiler.alloc_string(&e.key()[..]);
                *e.insert(string_id)
            }
        }
    }
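
    /// Maps the virtual `StringId` of a query invocation to a concrete string,
    /// e.g. the query's name and key. A sketch of the call pattern
    /// (hypothetical values, not a doctest):
    /// ```ignore
    /// let name = profiler.get_or_alloc_cached_string("typeck");
    /// profiler.map_query_invocation_id_to_string(QueryInvocationId(7), name);
    /// ```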
    pub fn map_query_invocation_id_to_string(&self, from: QueryInvocationId, to: StringId) {
        let from = StringId::new_virtual(from.0);
        self.profiler.map_virtual_to_concrete_string(from, to);
    }

    pub fn bulk_map_query_invocation_id_to_single_string<I>(&self, from: I, to: StringId)
    where
        I: Iterator<Item = QueryInvocationId> + ExactSizeIterator,
    {
        let from = from.map(|qid| StringId::new_virtual(qid.0));
        self.profiler.bulk_map_virtual_to_single_concrete_string(from, to);
    }

    pub fn query_key_recording_enabled(&self) -> bool {
        self.event_filter_mask.contains(EventFilter::QUERY_KEYS)
    }

    pub fn event_id_builder(&self) -> EventIdBuilder<'_> {
        EventIdBuilder::new(&self.profiler)
    }
}

#[must_use]
pub struct TimingGuard<'a>(Option<measureme::TimingGuard<'a>>);

impl<'a> TimingGuard<'a> {
    #[inline]
    pub fn start(
        profiler: &'a SelfProfiler,
        event_kind: StringId,
        event_id: EventId,
    ) -> TimingGuard<'a> {
        let thread_id = get_thread_id();
        let raw_profiler = &profiler.profiler;
        let timing_guard =
            raw_profiler.start_recording_interval_event(event_kind, event_id, thread_id);
        TimingGuard(Some(timing_guard))
    }

    #[inline]
    pub fn finish_with_query_invocation_id(self, query_invocation_id: QueryInvocationId) {
        if let Some(guard) = self.0 {
            // `outline` keeps this cold path out of the caller's inlined code.
            outline(|| {
                let event_id = StringId::new_virtual(query_invocation_id.0);
                let event_id = EventId::from_virtual(event_id);
                guard.finish_with_override_event_id(event_id);
            });
        }
    }

    #[inline]
    pub fn none() -> TimingGuard<'a> {
        TimingGuard(None)
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}
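
// A sketch of typical `TimingGuard` usage (illustrative; the names below are
// made up): the underlying measureme guard records the interval event when it
// is dropped, so timing a piece of work is just a matter of keeping the guard
// alive while the work runs:
//
//     let guard = TimingGuard::start(&self_profiler, event_kind, event_id);
//     let result = guard.run(|| expensive_work());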

struct VerboseInfo {
    start_time: Instant,
    start_rss: Option<usize>,
    message: String,
    format: TimePassesFormat,
}

#[must_use]
pub struct VerboseTimingGuard<'a> {
    info: Option<VerboseInfo>,
    _guard: TimingGuard<'a>,
}

impl<'a> VerboseTimingGuard<'a> {
    pub fn start(
        message_and_format: Option<(String, TimePassesFormat)>,
        _guard: TimingGuard<'a>,
    ) -> Self {
        VerboseTimingGuard {
            _guard,
            info: message_and_format.map(|(message, format)| VerboseInfo {
                start_time: Instant::now(),
                start_rss: get_resident_set_size(),
                message,
                format,
            }),
        }
    }

    #[inline(always)]
    pub fn run<R>(self, f: impl FnOnce() -> R) -> R {
        let _timer = self;
        f()
    }
}

impl Drop for VerboseTimingGuard<'_> {
    fn drop(&mut self) {
        if let Some(info) = &self.info {
            let end_rss = get_resident_set_size();
            let dur = info.start_time.elapsed();
            print_time_passes_entry(&info.message, dur, info.start_rss, end_rss, info.format);
        }
    }
}

struct JsonTimePassesEntry<'a> {
    pass: &'a str,
    time: f64,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
}

impl Display for JsonTimePassesEntry<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let Self { pass: what, time, start_rss, end_rss } = self;
        write!(f, r#"{{"pass":"{what}","time":{time},"rss_start":"#).unwrap();
        match start_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, r#","rss_end":"#)?;
        match end_rss {
            Some(rss) => write!(f, "{rss}")?,
            None => write!(f, "null")?,
        }
        write!(f, "}}")?;
        Ok(())
    }
}
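
// For reference, an entry for a pass named "typeck" that took 0.25s while RSS
// grew from 100 MB to 105 MB renders as (values illustrative):
//
//     {"pass":"typeck","time":0.25,"rss_start":100000000,"rss_end":105000000}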

pub fn print_time_passes_entry(
    what: &str,
    dur: Duration,
    start_rss: Option<usize>,
    end_rss: Option<usize>,
    format: TimePassesFormat,
) {
    match format {
        TimePassesFormat::Json => {
            let entry =
                JsonTimePassesEntry { pass: what, time: dur.as_secs_f64(), start_rss, end_rss };

            eprintln!(r#"time: {entry}"#);
            return;
        }
        TimePassesFormat::Text => (),
    }

    // Print the pass if its duration is greater than 5 ms, or it changed the
    // measured RSS.
    let is_notable = || {
        if dur.as_millis() > 5 {
            return true;
        }

        if let (Some(start_rss), Some(end_rss)) = (start_rss, end_rss) {
            let change_rss = end_rss.abs_diff(start_rss);
            if change_rss > 0 {
                return true;
            }
        }

        false
    };
    if !is_notable() {
        return;
    }

    // RSS numbers are in bytes; report them in whole (decimal) megabytes.
    let rss_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as usize;
    let rss_change_to_mb = |rss| (rss as f64 / 1_000_000.0).round() as i128;

    let mem_string = match (start_rss, end_rss) {
        (Some(start_rss), Some(end_rss)) => {
            let change_rss = end_rss as i128 - start_rss as i128;

            format!(
                "; rss: {:>4}MB -> {:>4}MB ({:>+5}MB)",
                rss_to_mb(start_rss),
                rss_to_mb(end_rss),
                rss_change_to_mb(change_rss),
            )
        }
        (Some(start_rss), None) => format!("; rss start: {:>4}MB", rss_to_mb(start_rss)),
        (None, Some(end_rss)) => format!("; rss end: {:>4}MB", rss_to_mb(end_rss)),
        (None, None) => String::new(),
    };
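
    // With both RSS values available, the line printed below looks like
    // (illustrative; a tab separates the stats from the pass name):
    //     time:   0.123; rss:  100MB ->  105MB (   +5MB)    typeck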
    eprintln!("time: {:>7}{}\t{}", duration_to_secs_str(dur), mem_string, what);
}

// Hack up our own formatting for the duration to make it easier for scripts
// to parse (always use the same number of decimal places and the same unit).
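// For example, a 1234 ms duration renders as "1.234", and 50 µs as "0.000".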
pub fn duration_to_secs_str(dur: std::time::Duration) -> String {
    format!("{:.3}", dur.as_secs_f64())
}

fn get_thread_id() -> u32 {
    std::thread::current().id().as_u64().get() as u32
}

// Memory reporting
cfg_select! {
    windows => {
        pub fn get_resident_set_size() -> Option<usize> {
            use windows::{
                Win32::System::ProcessStatus::{K32GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS},
                Win32::System::Threading::GetCurrentProcess,
            };

            let mut pmc = PROCESS_MEMORY_COUNTERS::default();
            let pmc_size = size_of_val(&pmc);
            unsafe {
                K32GetProcessMemoryInfo(
                    GetCurrentProcess(),
                    &mut pmc,
                    pmc_size as u32,
                )
            }
            .ok()
            .ok()?;

            Some(pmc.WorkingSetSize)
        }
    }
    target_os = "macos" => {
        pub fn get_resident_set_size() -> Option<usize> {
            use libc::{c_int, c_void, getpid, proc_pidinfo, proc_taskinfo, PROC_PIDTASKINFO};
            use std::mem;
            const PROC_TASKINFO_SIZE: c_int = size_of::<proc_taskinfo>() as c_int;

            unsafe {
                let mut info: proc_taskinfo = mem::zeroed();
                let info_ptr = &mut info as *mut proc_taskinfo as *mut c_void;
                let pid = getpid() as c_int;
                let ret = proc_pidinfo(pid, PROC_PIDTASKINFO, 0, info_ptr, PROC_TASKINFO_SIZE);
                if ret == PROC_TASKINFO_SIZE {
                    Some(info.pti_resident_size as usize)
                } else {
                    None
                }
            }
        }
    }
    unix => {
        pub fn get_resident_set_size() -> Option<usize> {
            // Field 1 of /proc/self/statm is the resident size in pages;
            // assume the common 4 KiB page size.
            let field = 1;
            let contents = fs::read("/proc/self/statm").ok()?;
            let contents = String::from_utf8(contents).ok()?;
            let s = contents.split_whitespace().nth(field)?;
            let npages = s.parse::<usize>().ok()?;
            Some(npages * 4096)
        }
    }
    _ => {
        pub fn get_resident_set_size() -> Option<usize> {
            None
        }
    }
}

#[cfg(test)]
mod tests;