Struct miri::concurrency::data_race::MemoryCellClocks
struct MemoryCellClocks {
write: (VectorIdx, VTimestamp),
write_type: NaWriteType,
read: VClock,
atomic_ops: Option<Box<AtomicMemoryCellClocks>>,
}
Memory Cell vector clock metadata for data-race detection.
Fields
write: (VectorIdx, VTimestamp)
The vector-clock timestamp and the thread that did the last non-atomic write. We don't need a full VClock here: it is always a single thread and nothing synchronizes, so the effective clock is all-0 except for the thread that did the write.
write_type: NaWriteType
The type of operation that the write index represents: newly allocated memory, a non-atomic write, or a deallocation of memory.
read: VClock
The vector-clock of all non-atomic reads that happened since the last non-atomic write (i.e., we join together the “singleton” clocks corresponding to each read). It is reset to zero on each write operation.
atomic_ops: Option<Box<AtomicMemoryCellClocks>>
Atomic access, acquire, and release-sequence tracking clocks. For memory that is only ever accessed non-atomically (the common case), this is None.
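For intuition, here is a minimal, self-contained sketch of the singleton write clock; the usize indices, u32 timestamps, and Vec-backed clocks are simplified stand-ins for this example, not miri's actual VectorIdx, VTimestamp, and VClock types:

// Simplified stand-in types for illustration only.
type VectorIdx = usize;
type VTimestamp = u32;
type VClock = Vec<u32>; // one timestamp slot per thread index

// The effective clock of a non-atomic write is all-0 except at the writing
// thread, so a single comparison decides whether the write happens-before
// the given clock.
fn write_was_before(write: (VectorIdx, VTimestamp), other: &VClock) -> bool {
    other.get(write.0).copied().unwrap_or(0) >= write.1
}

fn main() {
    let write: (VectorIdx, VTimestamp) = (1, 5); // thread 1 wrote at timestamp 5
    let reader: VClock = vec![3, 7, 0];          // has seen thread 1 up to 7
    assert!(write_was_before(write, &reader));   // the write is visible: no race
    let racy: VClock = vec![9, 2, 4];            // has only seen thread 1 up to 2
    assert!(!write_was_before(write, &racy));    // concurrent with the write: race
}

This is also why read, unlike write, must be a full VClock: several threads can read concurrently without synchronizing with each other, so no single (index, timestamp) pair can summarize them.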
Implementations
impl MemoryCellClocks
fn new(alloc: VTimestamp, alloc_index: VectorIdx) -> Self
Create a new set of clocks representing memory allocated at a given vector timestamp and index.
fn write_was_before(&self, other: &VClock) -> bool
fn write(&self) -> VClock
fn atomic(&self) -> Option<&AtomicMemoryCellClocks>
Load the internal atomic memory cells if they exist.
fn atomic_mut_unwrap(&mut self) -> &mut AtomicMemoryCellClocks
Load the internal atomic memory cells mutably; panics if they do not exist.
fn atomic_access(&mut self, thread_clocks: &ThreadClockSet, size: Size) -> Result<&mut AtomicMemoryCellClocks, DataRace>
Load or create the internal atomic memory metadata if it does not exist. Also ensures we do not do mixed-size atomic accesses, and updates the recorded atomic access size.
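As a rough sketch of the get-or-create-then-check shape of this method (the Cell and AtomicClocks names and the plain access_size field are assumptions made for this example; real mixed-size handling is more involved than an equality check):

struct AtomicClocks {
    access_size: u64, // size in bytes of the atomic accesses seen so far
    // read/write/synchronization clocks elided
}

struct Cell {
    atomic_ops: Option<Box<AtomicClocks>>,
}

#[derive(Debug)]
struct DataRace; // stand-in for miri's race / mixed-size error

impl Cell {
    fn atomic_access(&mut self, size: u64) -> Result<&mut AtomicClocks, DataRace> {
        if self.atomic_ops.is_none() {
            self.atomic_ops = Some(Box::new(AtomicClocks { access_size: size }));
        }
        let atomic = self.atomic_ops.as_deref_mut().unwrap();
        // Reject mixed-size atomic accesses to the same memory cell.
        if atomic.access_size == size { Ok(atomic) } else { Err(DataRace) }
    }
}

fn main() {
    let mut cell = Cell { atomic_ops: None };
    assert!(cell.atomic_access(4).is_ok());  // first atomic access creates the metadata
    assert!(cell.atomic_access(4).is_ok());  // same size: fine
    assert!(cell.atomic_access(8).is_err()); // mixed-size access: rejected
}

In miri, the returned metadata also carries the atomic tracking clocks described for the atomic_ops field above.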
fn load_acquire(&mut self, thread_clocks: &mut ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update memory cell data-race tracking for atomic load acquire semantics; this is a no-op if this memory was not previously used as atomic memory.
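A minimal sketch of the acquire rule, assuming Vec-backed stand-in clocks: the synchronization clock stored at the cell is joined into the loading thread's clock, so everything the matching release store had seen becomes visible to the loader:

type VClock = Vec<u32>;

// Pointwise maximum of two clocks of equal length.
fn join(into: &mut VClock, other: &VClock) {
    for (a, b) in into.iter_mut().zip(other) {
        *a = (*a).max(*b);
    }
}

fn main() {
    let mut thread_clock: VClock = vec![4, 0, 0]; // the loading thread
    let cell_sync: VClock = vec![0, 7, 2];        // left behind by a release store
    join(&mut thread_clock, &cell_sync);          // load(Acquire) synchronizes
    assert_eq!(thread_clock, vec![4, 7, 2]);      // the store's history is now visible
}

The relaxed load described next differs only in where the joined clock lands.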
fn load_relaxed(&mut self, thread_clocks: &mut ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update memory cell data-race tracking for atomic load relaxed semantics; this is a no-op if this memory was not previously used as atomic memory.
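By contrast, a relaxed load does not synchronize on its own. The sketch below assumes a fence-based model in which the pending synchronization is parked in a per-thread fence-acquire clock and only merged into the thread clock by a later acquire fence; the fence_acquire name is an assumption for this example:

type VClock = Vec<u32>;

fn join(into: &mut VClock, other: &VClock) {
    for (a, b) in into.iter_mut().zip(other) {
        *a = (*a).max(*b);
    }
}

fn main() {
    let mut thread_clock: VClock = vec![4, 0, 0];
    let mut fence_acquire: VClock = vec![0, 0, 0];
    let cell_sync: VClock = vec![0, 7, 2];

    join(&mut fence_acquire, &cell_sync);    // load(Relaxed): nothing visible yet
    assert_eq!(thread_clock, vec![4, 0, 0]);

    join(&mut thread_clock, &fence_acquire); // fence(Acquire): synchronization lands
    assert_eq!(thread_clock, vec![4, 7, 2]);
}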
fn store_release(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update the memory cell data-race tracking for atomic store release semantics.
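A plausible sketch of the release-store rule with the same stand-in clocks: the cell's synchronization clock is replaced by the storing thread's clock, so a later acquire load of this value picks up the store's entire history:

type VClock = Vec<u32>;

fn main() {
    let thread_clock: VClock = vec![0, 7, 2];  // the storing thread's clock
    let mut cell_sync: VClock = vec![9, 1, 0]; // whatever an earlier store left behind

    cell_sync.clone_from(&thread_clock);       // store(Release): overwrite, not join
    assert_eq!(cell_sync, vec![0, 7, 2]);
}

Overwriting rather than joining reflects that a plain store starts a new release sequence; the RMW operations below join instead.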
fn store_relaxed(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update the memory cell data-race tracking for atomic store relaxed semantics.
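The relaxed counterpart, under the same fence-based assumption as the relaxed-load sketch above: a relaxed store publishes only what the thread had released at its last release fence, not its current clock:

type VClock = Vec<u32>;

fn main() {
    let thread_clock: VClock = vec![0, 7, 2];  // current clock (not published)
    let fence_release: VClock = vec![0, 5, 0]; // clock at the last fence(Release)
    let mut cell_sync: VClock = vec![9, 1, 0];

    cell_sync.clone_from(&fence_release);      // store(Relaxed) after a release fence
    assert_eq!(cell_sync, vec![0, 5, 0]);
    assert_ne!(cell_sync, thread_clock);       // the un-fenced part is not released
}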
fn rmw_release(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update the memory cell data-race tracking for atomic store release semantics for RMW operations (a combined sketch covering both RMW orderings follows the rmw_relaxed entry below).
fn rmw_relaxed(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Update the memory cell data-race tracking for atomic store relaxed semantics for RMW operations.
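A combined sketch for both RMW orderings, again with simplified clocks rather than the actual implementation: because a read-modify-write continues the release sequence headed by the store it read from, it joins into the cell's synchronization clock instead of overwriting it; the release flavor contributes the thread's clock, and the relaxed flavor (under the fence model assumed above) contributes the fence-release clock:

type VClock = Vec<u32>;

fn join(into: &mut VClock, other: &VClock) {
    for (a, b) in into.iter_mut().zip(other) {
        *a = (*a).max(*b);
    }
}

fn main() {
    // Synchronization left behind by the release store heading the sequence.
    let mut cell_sync: VClock = vec![0, 7, 2];

    // rmw(Release) by thread 0: its clock is merged in, the store's is kept.
    let rmw_thread: VClock = vec![5, 1, 0];
    join(&mut cell_sync, &rmw_thread);
    assert_eq!(cell_sync, vec![5, 7, 2]);

    // rmw(Relaxed) by thread 2: only what it had released at its last release
    // fence is merged in.
    let fence_release: VClock = vec![0, 0, 9];
    join(&mut cell_sync, &fence_release);
    assert_eq!(cell_sync, vec![5, 7, 9]);
}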
fn atomic_read_detect(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Detect data-races with an atomic read, caused by a non-atomic access that does not happen-before the atomic read (a combined sketch covering both detection routines follows the atomic_write_detect entry below).
fn atomic_write_detect(&mut self, thread_clocks: &ThreadClockSet, index: VectorIdx, access_size: Size) -> Result<(), DataRace>
Detect data-races with an atomic write, either with a non-atomic read or with a non-atomic write.
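Both detection routines boil down to happens-before checks against the cell's non-atomic history. A simplified sketch (plain tuples and Vecs stand in for the struct's fields, and the recording of the atomic access itself is omitted):

type VClock = Vec<u32>;

// Is everything recorded in `a` ordered before `b` (pointwise <=)?
fn le(a: &VClock, b: &VClock) -> bool {
    a.iter().zip(b).all(|(x, y)| x <= y)
}

fn main() {
    // Non-atomic history of one memory cell.
    let last_write = (0usize, 3u32);   // last non-atomic write: thread 0, timestamp 3
    let reads: VClock = vec![0, 6, 0]; // non-atomic reads since that write

    // Clock of the thread doing the atomic access: it has seen the write
    // (slot 0 is 4 >= 3) but not thread 1's read (slot 1 is 5 < 6).
    let clock: VClock = vec![4, 5, 0];

    // atomic_read_detect: only the last non-atomic write can race.
    let read_ok = clock[last_write.0] >= last_write.1;
    assert!(read_ok);

    // atomic_write_detect: the write and all recorded reads must be ordered
    // before the atomic write.
    let write_ok = clock[last_write.0] >= last_write.1 && le(&reads, &clock);
    assert!(!write_ok); // races with the unordered non-atomic read
}

A failed check corresponds to these methods returning a DataRace error.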
fn read_race_detect(&mut self, thread_clocks: &mut ThreadClockSet, index: VectorIdx, read_type: NaReadType, current_span: Span) -> Result<(), DataRace>
Detect races for non-atomic read operations at the current memory cell; returns an error if a data race is detected (a combined sketch covering the non-atomic read and write checks follows the write_race_detect entry below).
fn write_race_detect(&mut self, thread_clocks: &mut ThreadClockSet, index: VectorIdx, write_type: NaWriteType, current_span: Span) -> Result<(), DataRace>
Detect races for non-atomic write operations at the current memory cell; returns an error if a data race is detected.
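A combined sketch of the non-atomic read and write checks with the same stand-in types (illustrative only; the real methods also take the access type and the current span, and consult the atomic clocks): a read must be ordered after the last write and is then recorded in the read clock; a write must additionally be ordered after all recorded reads, and on success the cell becomes a fresh singleton write with the read clock reset:

type VClock = Vec<u32>;

fn le(a: &VClock, b: &VClock) -> bool {
    a.iter().zip(b).all(|(x, y)| x <= y)
}

fn main() {
    // Cell state: the last non-atomic write and the reads recorded since then.
    let mut write = (0usize, 3u32);
    let mut reads: VClock = vec![0, 0, 0];

    // Thread 1 reads non-atomically. Its clock has seen the write, so there is
    // no race, and the read is recorded at the reader's slot.
    let t1: VClock = vec![4, 6, 0];
    assert!(t1[write.0] >= write.1);
    reads[1] = t1[1];

    // Thread 2 writes non-atomically but has not synchronized with thread 1's
    // read: the check "last write and all reads are ordered before us" fails.
    let t2: VClock = vec![4, 0, 2];
    let ok = t2[write.0] >= write.1 && le(&reads, &t2);
    assert!(!ok); // reported as a data race

    // A race-free write makes the cell a fresh singleton write and resets reads.
    let t2_synced: VClock = vec![4, 6, 2];
    if t2_synced[write.0] >= write.1 && le(&reads, &t2_synced) {
        write = (2, t2_synced[2]);
        reads = vec![0; reads.len()];
    }
    assert_eq!(write, (2, 2));
    assert_eq!(reads, vec![0, 0, 0]);
}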
Trait Implementations
impl Clone for MemoryCellClocks
fn clone(&self) -> MemoryCellClocks
fn clone_from(&mut self, source: &Self)
impl Debug for MemoryCellClocks
impl PartialEq for MemoryCellClocks
impl Eq for MemoryCellClocks
impl StructuralPartialEq for MemoryCellClocks
Auto Trait Implementations
impl Freeze for MemoryCellClocks
impl RefUnwindSafe for MemoryCellClocks
impl !Send for MemoryCellClocks
impl !Sync for MemoryCellClocks
impl Unpin for MemoryCellClocks
impl UnwindSafe for MemoryCellClocks
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)
Layout
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...)
attributes. Please see the Rust Reference's “Type Layout” chapter for details on type layout guarantees.
Size: 96 bytes