Platform-specific intrinsics for the wasm32 platform.

This module provides intrinsics specific to the WebAssembly architecture. Here you'll find intrinsics specific to WebAssembly that aren't otherwise surfaced somewhere in a cross-platform abstraction of std, and you'll also find functions for leveraging WebAssembly proposals such as atomics and simd.
Intrinsics in the wasm32
module are modeled after the WebAssembly
instructions that they represent. Most functions are named after the
instruction they intend to correspond to, and the arguments/results
correspond to the type signature of the instruction itself. Stable
WebAssembly instructions are documented online.
If a proposal is not yet stable in WebAssembly itself then the functions within this module may be unstable and require the nightly channel of Rust to use. As the proposal itself stabilizes, the intrinsics in this module should stabilize as well.
See the module documentation for general information
about the arch
module and platform intrinsics.
Atomics
The threads proposal for WebAssembly adds a number of
instructions for dealing with multithreaded programs. Most instructions
added in the atomics proposal are exposed in Rust through the
std::sync::atomic
module. Some instructions, however, don’t have
direct equivalents in Rust so they’re exposed here instead.
Note that the instructions added in the atomics proposal can work both in a context with a shared wasm memory and without one. These intrinsics are always available in the standard library, but you likely won't be able to use them very productively unless you recompile the standard library (and all your code) with -Ctarget-feature=+atomics.
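For reference, a rebuild along those lines typically looks something like the following nightly-only invocation. This is a sketch, not a definitive recipe: the exact flags, feature names, and target may evolve along with the threads proposal.

```shell
# Sketch: rebuild the standard library with the atomics (and bulk-memory)
# target features enabled so shared-memory wasm threads can work end to end.
# Requires a nightly toolchain.
RUSTFLAGS='-Ctarget-feature=+atomics,+bulk-memory' \
    cargo +nightly build \
    -Zbuild-std=std,panic_abort \
    --target wasm32-unknown-unknown
```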
It’s also worth pointing out that multithreaded WebAssembly and its
story in Rust is still in a somewhat “early days” phase as of the time
of this writing. Pieces should mostly work but it generally requires a
good deal of manual setup. At this time it’s not as simple as “just call
std::thread::spawn
”, but it will hopefully get there one day!
SIMD
The simd proposal for WebAssembly added a new v128 type for a 128-bit SIMD register. It also added a large array of instructions to operate on the v128 type to perform data processing. Using SIMD on wasm is intended to be similar to using it on x86_64, for example. You'd write a function such as:
#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn uses_simd() {
use std::arch::wasm32::*;
// ...
}
Unlike x86_64, however, WebAssembly does not currently have dynamic detection at runtime as to whether SIMD is supported (this is one of the motivators for the conditional sections and feature detection proposals, but those are still pretty early days). This means that your binary will either have SIMD and can only run on engines which support SIMD, or it will not have SIMD at all. For compatibility the standard library itself does not use any SIMD internally. Determining how best to ship your WebAssembly binary with SIMD is largely left up to you as it can be pretty nuanced depending on your situation.
To enable SIMD support at compile time you need to do one of two things:

First, you can annotate functions with #[target_feature(enable = "simd128")]. This causes just that one function to have SIMD support available to it, and intrinsics will get inlined as usual in this situation.

Second, you can compile your program with -Ctarget-feature=+simd128. This compilation flag blanket-enables SIMD support for your entire compilation. Note that this does not include the standard library unless you recompile the standard library.

If you enable SIMD via either of these routes then you'll have a WebAssembly binary that uses SIMD instructions, and you'll need to ship it accordingly. Also note that if you call SIMD intrinsics but don't enable SIMD via either of these mechanisms, you'll still have SIMD generated in your program. This means that to generate a binary without SIMD you'll need to avoid both options above plus calling into any intrinsics in this module.
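As a concrete illustration of the gating described above, here is a small sketch that adds four f32 lanes at a time with the `f32x4`, `f32x4_add`, and `f32x4_extract_lane` intrinsics on wasm32, with a scalar fallback elsewhere. The `add4` helper is hypothetical, not part of this module.

```rust
// Hypothetical helper: lane-wise addition of four f32s, using the wasm32
// SIMD intrinsics when compiled for wasm32 and plain scalar code otherwise.
#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::wasm32::*;
    // Materialize two v128 values, add them lane-wise, then read lanes back.
    let va = f32x4(a[0], a[1], a[2], a[3]);
    let vb = f32x4(b[0], b[1], b[2], b[3]);
    let vr = f32x4_add(va, vb);
    [
        f32x4_extract_lane::<0>(vr),
        f32x4_extract_lane::<1>(vr),
        f32x4_extract_lane::<2>(vr),
        f32x4_extract_lane::<3>(vr),
    ]
}

#[cfg(not(target_arch = "wasm32"))]
unsafe fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    // Scalar fallback so the same code compiles on non-wasm targets.
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn main() {
    // Safety: on wasm32 the running engine must actually support simd128.
    let sum = unsafe { add4([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]) };
    println!("{:?}", sum); // prints [11.0, 22.0, 33.0, 44.0]
}
```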
Structs
 v128
target_family="wasm"
WASMspecific 128bit wide SIMD vector type.
Functions
 f32x4
target_family="wasm"
Materializes a SIMD value from the provided operands.  f32x4_abs
target_family="wasm"
andsimd128
Calculates the absolute value of each lane of a 128bit vector interpreted as four 32bit floating point numbers.  f32x4_add
target_family="wasm"
andsimd128
Lanewise addition of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_ceil
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value not smaller than the input.  f32x4_convert_i32x4
target_family="wasm"
andsimd128
Converts a 128bit vector interpreted as four 32bit signed integers into a 128bit vector of four 32bit floating point numbers.  f32x4_convert_u32x4
target_family="wasm"
andsimd128
Converts a 128bit vector interpreted as four 32bit unsigned integers into a 128bit vector of four 32bit floating point numbers.  f32x4_demote_f64x2_zero
target_family="wasm"
andsimd128
Conversion of the two doubleprecision floating point lanes to two lower singleprecision lanes of the result. The two higher lanes of the result are initialized to zero. If the conversion result is not representable as a singleprecision floating point number, it is rounded to the nearesteven representable number.  f32x4_div
target_family="wasm"
andsimd128
Lanewise division of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 4 packed f32 numbers.  f32x4_floor
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value not greater than the input.  f32x4_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_max
target_family="wasm"
andsimd128
Calculates the lanewise minimum of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_min
target_family="wasm"
andsimd128
Calculates the lanewise minimum of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_mul
target_family="wasm"
andsimd128
Lanewise multiplication of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit floating point numbers.  f32x4_nearest
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value; if two values are equally near, rounds to the even one.  f32x4_neg
target_family="wasm"
andsimd128
Negates each lane of a 128bit vector interpreted as four 32bit floating point numbers.  f32x4_pmax
target_family="wasm"
andsimd128
Lanewise maximum value, defined asa < b ? b : a
 f32x4_pmin
target_family="wasm"
andsimd128
Lanewise minimum value, defined asb < a ? b : a
 f32x4_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 4 packed f32 numbers.  f32x4_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  f32x4_sqrt
target_family="wasm"
andsimd128
Calculates the square root of each lane of a 128bit vector interpreted as four 32bit floating point numbers.  f32x4_sub
target_family="wasm"
andsimd128
Lanewise subtraction of two 128bit vectors interpreted as four 32bit floating point numbers.  f32x4_trunc
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value with the magnitude not larger than the input.  f64x2
target_family="wasm"
Materializes a SIMD value from the provided operands.  f64x2_abs
target_family="wasm"
andsimd128
Calculates the absolute value of each lane of a 128bit vector interpreted as two 64bit floating point numbers.  f64x2_add
target_family="wasm"
andsimd128
Lanewise add of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_ceil
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value not smaller than the input.  f64x2_convert_low_i32x4
target_family="wasm"
andsimd128
Lanewise conversion from integer to floating point.  f64x2_convert_low_u32x4
target_family="wasm"
andsimd128
Lanewise conversion from integer to floating point.  f64x2_div
target_family="wasm"
andsimd128
Lanewise divide of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 2 packed f64 numbers.  f64x2_floor
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value not greater than the input.  f64x2_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_max
target_family="wasm"
andsimd128
Calculates the lanewise maximum of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_min
target_family="wasm"
andsimd128
Calculates the lanewise minimum of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_mul
target_family="wasm"
andsimd128
Lanewise multiply of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit floating point numbers.  f64x2_nearest
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value; if two values are equally near, rounds to the even one.  f64x2_neg
target_family="wasm"
andsimd128
Negates each lane of a 128bit vector interpreted as two 64bit floating point numbers.  f64x2_pmax
target_family="wasm"
andsimd128
Lanewise maximum value, defined asa < b ? b : a
 f64x2_pmin
target_family="wasm"
andsimd128
Lanewise minimum value, defined asb < a ? b : a
 f64x2_promote_low_f32x4
target_family="wasm"
andsimd128
Conversion of the two lower singleprecision floating point lanes to the two doubleprecision lanes of the result.  f64x2_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 2 packed f64 numbers.  f64x2_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  f64x2_sqrt
target_family="wasm"
andsimd128
Calculates the square root of each lane of a 128bit vector interpreted as two 64bit floating point numbers.  f64x2_sub
target_family="wasm"
andsimd128
Lanewise subtract of two 128bit vectors interpreted as two 64bit floating point numbers.  f64x2_trunc
target_family="wasm"
andsimd128
Lanewise rounding to the nearest integral value with the magnitude not larger than the input.  i8x16
target_family="wasm"
Materializes a SIMD value from the provided operands.  i8x16_abs
target_family="wasm"
andsimd128
Lanewise wrapping absolute value.  i8x16_add
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed sixteen 8bit integers.  i8x16_add_sat
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed sixteen 8bit signed integers, saturating on overflow toi8::MAX
.  i8x16_all_true
target_family="wasm"
andsimd128
Returns true if all lanes are nonzero, false otherwise.  i8x16_bitmask
target_family="wasm"
andsimd128
Extracts the high bit for each lane ina
and produce a scalar mask with all bits concatenated.  i8x16_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit integers.  i8x16_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 16 packed i8 numbers.  i8x16_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit signed integers.  i8x16_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit signed integers.  i8x16_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit signed integers.  i8x16_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit signed integers.  i8x16_max
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the maximum of each pair.  i8x16_min
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the minimum of each pair.  i8x16_narrow_i16x8
target_family="wasm"
andsimd128
Converts two input vectors into a smaller lane vector by narrowing each lane.  i8x16_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit integers.  i8x16_neg
target_family="wasm"
andsimd128
Negates a 128bit vectors interpreted as sixteen 8bit signed integers  i8x16_popcnt
target_family="wasm"
andsimd128
Count the number of bits set to one within each lane.  i8x16_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 16 packed i8 numbers.  i8x16_shl
target_family="wasm"
andsimd128
Shifts each lane to the left by the specified number of bits.  i8x16_shr
target_family="wasm"
andsimd128
Shifts each lane to the right by the specified number of bits, sign extending.  i8x16_shuffle
target_family="wasm"
andsimd128
Returns a new vector with lanes selected from the lanes of the two input vectors$a
and$b
specified in the 16 immediate operands.  i8x16_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  i8x16_sub
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed sixteen 8bit integers.  i8x16_sub_sat
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed sixteen 8bit signed integers, saturating on overflow toi8::MIN
.  i8x16_swizzle
target_family="wasm"
andsimd128
Returns a new vector with lanes selected from the lanes of the first input vectora
specified in the second input vectors
.  i16x8
target_family="wasm"
Materializes a SIMD value from the provided operands.  i16x8_abs
target_family="wasm"
andsimd128
Lanewise wrapping absolute value.  i16x8_add
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed eight 16bit integers.  i16x8_add_sat
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed eight 16bit signed integers, saturating on overflow toi16::MAX
.  i16x8_all_true
target_family="wasm"
andsimd128
Returns true if all lanes are nonzero, false otherwise.  i16x8_bitmask
target_family="wasm"
andsimd128
Extracts the high bit for each lane ina
and produce a scalar mask with all bits concatenated.  i16x8_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit integers.  i16x8_extadd_pairwise_i8x16
target_family="wasm"
andsimd128
Integer extended pairwise addition producing extended results (twice wider results than the inputs).  i16x8_extadd_pairwise_u8x16
target_family="wasm"
andsimd128
Integer extended pairwise addition producing extended results (twice wider results than the inputs).  i16x8_extend_high_i8x16
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, sign extended.  i16x8_extend_high_u8x16
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, zero extended.  i16x8_extend_low_i8x16
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, sign extended.  i16x8_extend_low_u8x16
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, zero extended.  i16x8_extmul_high_i8x16
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i16x8_extmul_high_u8x16
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i16x8_extmul_low_i8x16
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i16x8_extmul_low_u8x16
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i16x8_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 8 packed i16 numbers.  i16x8_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit signed integers.  i16x8_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit signed integers.  i16x8_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit signed integers.  i16x8_load_extend_i8x8^{⚠}
target_family="wasm"
andsimd128
Load eight 8bit integers and sign extend each one to a 16bit lane  i16x8_load_extend_u8x8^{⚠}
target_family="wasm"
andsimd128
Load eight 8bit integers and zero extend each one to a 16bit lane  i16x8_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit signed integers.  i16x8_max
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the maximum of each pair.  i16x8_min
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the minimum of each pair.  i16x8_mul
target_family="wasm"
andsimd128
Multiplies two 128bit vectors as if they were two packed eight 16bit signed integers.  i16x8_narrow_i32x4
target_family="wasm"
andsimd128
Converts two input vectors into a smaller lane vector by narrowing each lane.  i16x8_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 8 sixteenbit integers.  i16x8_neg
target_family="wasm"
andsimd128
Negates a 128bit vectors interpreted as eight 16bit signed integers  i16x8_q15mulr_sat
target_family="wasm"
andsimd128
Lanewise saturating rounding multiplication in Q15 format.  i16x8_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 8 packed i16 numbers.  i16x8_shl
target_family="wasm"
andsimd128
Shifts each lane to the left by the specified number of bits.  i16x8_shr
target_family="wasm"
andsimd128
Shifts each lane to the right by the specified number of bits, sign extending.  i16x8_shuffle
target_family="wasm"
andsimd128
Same asi8x16_shuffle
, except operates as if the inputs were eight 16bit integers, only taking 8 indices to shuffle.  i16x8_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  i16x8_sub
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed eight 16bit integers.  i16x8_sub_sat
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed eight 16bit signed integers, saturating on overflow toi16::MIN
.  i32x4
target_family="wasm"
Materializes a SIMD value from the provided operands.  i32x4_abs
target_family="wasm"
andsimd128
Lanewise wrapping absolute value.  i32x4_add
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed four 32bit integers.  i32x4_all_true
target_family="wasm"
andsimd128
Returns true if all lanes are nonzero, false otherwise.  i32x4_bitmask
target_family="wasm"
andsimd128
Extracts the high bit for each lane ina
and produce a scalar mask with all bits concatenated.  i32x4_dot_i16x8
target_family="wasm"
andsimd128
Lanewise multiply signed 16bit integers in the two input vectors and add adjacent pairs of the full 32bit results.  i32x4_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit integers.  i32x4_extadd_pairwise_i16x8
target_family="wasm"
andsimd128
Integer extended pairwise addition producing extended results (twice wider results than the inputs).  i32x4_extadd_pairwise_u16x8
target_family="wasm"
andsimd128
Integer extended pairwise addition producing extended results (twice wider results than the inputs).  i32x4_extend_high_i16x8
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, sign extended.  i32x4_extend_high_u16x8
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, zero extended.  i32x4_extend_low_i16x8
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, sign extended.  i32x4_extend_low_u16x8
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, zero extended.  i32x4_extmul_high_i16x8
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i32x4_extmul_high_u16x8
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i32x4_extmul_low_i16x8
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i32x4_extmul_low_u16x8
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i32x4_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 4 packed i32 numbers.  i32x4_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit signed integers.  i32x4_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit signed integers.  i32x4_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit signed integers.  i32x4_load_extend_i16x4^{⚠}
target_family="wasm"
andsimd128
Load four 16bit integers and sign extend each one to a 32bit lane  i32x4_load_extend_u16x4^{⚠}
target_family="wasm"
andsimd128
Load four 16bit integers and zero extend each one to a 32bit lane  i32x4_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit signed integers.  i32x4_max
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the maximum of each pair.  i32x4_min
target_family="wasm"
andsimd128
Compares lanewise signed integers, and returns the minimum of each pair.  i32x4_mul
target_family="wasm"
andsimd128
Multiplies two 128bit vectors as if they were two packed four 32bit signed integers.  i32x4_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 4 thirtytwobit integers.  i32x4_neg
target_family="wasm"
andsimd128
Negates a 128bit vectors interpreted as four 32bit signed integers  i32x4_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 4 packed i32 numbers.  i32x4_shl
target_family="wasm"
andsimd128
Shifts each lane to the left by the specified number of bits.  i32x4_shr
target_family="wasm"
andsimd128
Shifts each lane to the right by the specified number of bits, sign extending.  i32x4_shuffle
target_family="wasm"
andsimd128
Same asi8x16_shuffle
, except operates as if the inputs were four 32bit integers, only taking 4 indices to shuffle.  i32x4_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  i32x4_sub
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed four 32bit integers.  i32x4_trunc_sat_f32x4
target_family="wasm"
andsimd128
Converts a 128bit vector interpreted as four 32bit floating point numbers into a 128bit vector of four 32bit signed integers.  i32x4_trunc_sat_f64x2_zero
target_family="wasm"
andsimd128
Saturating conversion of the two doubleprecision floating point lanes to two lower integer lanes using the IEEEconvertToIntegerTowardZero
function.  i64x2
target_family="wasm"
Materializes a SIMD value from the provided operands.  i64x2_abs
target_family="wasm"
andsimd128
Lanewise wrapping absolute value.  i64x2_add
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed two 64bit integers.  i64x2_all_true
target_family="wasm"
andsimd128
Returns true if all lanes are nonzero, false otherwise.  i64x2_bitmask
target_family="wasm"
andsimd128
Extracts the high bit for each lane ina
and produce a scalar mask with all bits concatenated.  i64x2_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit integers.  i64x2_extend_high_i32x4
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, sign extended.  i64x2_extend_high_u32x4
target_family="wasm"
andsimd128
Converts high half of the smaller lane vector to a larger lane vector, zero extended.  i64x2_extend_low_i32x4
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, sign extended.  i64x2_extend_low_u32x4
target_family="wasm"
andsimd128
Converts low half of the smaller lane vector to a larger lane vector, zero extended.  i64x2_extmul_high_i32x4
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i64x2_extmul_high_u32x4
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i64x2_extmul_low_i32x4
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i64x2_extmul_low_u32x4
target_family="wasm"
andsimd128
Lanewise integer extended multiplication producing twice wider result than the inputs.  i64x2_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 2 packed i64 numbers.  i64x2_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit signed integers.  i64x2_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit signed integers.  i64x2_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit signed integers.  i64x2_load_extend_i32x2^{⚠}
target_family="wasm"
andsimd128
Load two 32bit integers and sign extend each one to a 64bit lane  i64x2_load_extend_u32x2^{⚠}
target_family="wasm"
andsimd128
Load two 32bit integers and zero extend each one to a 64bit lane  i64x2_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit signed integers.  i64x2_mul
target_family="wasm"
andsimd128
Multiplies two 128bit vectors as if they were two packed two 64bit integers.  i64x2_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 2 sixtyfourbit integers.  i64x2_neg
target_family="wasm"
andsimd128
Negates a 128bit vectors interpreted as two 64bit signed integers  i64x2_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 2 packed i64 numbers.  i64x2_shl
target_family="wasm"
andsimd128
Shifts each lane to the left by the specified number of bits.  i64x2_shr
target_family="wasm"
andsimd128
Shifts each lane to the right by the specified number of bits, sign extending.  i64x2_shuffle
target_family="wasm"
andsimd128
Same asi8x16_shuffle
, except operates as if the inputs were two 64bit integers, only taking 2 indices to shuffle.  i64x2_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  i64x2_sub
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed two 64bit integers.  memory_grow
target_family="wasm"
Corresponding intrinsic to wasm’smemory.grow
instruction  memory_size
target_family="wasm"
Corresponding intrinsic to wasm’smemory.size
instruction  u8x16
target_family="wasm"
Materializes a SIMD value from the provided operands.  u8x16_add
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed sixteen 8bit integers.  u8x16_add_sat
target_family="wasm"
andsimd128
Adds two 128bit vectors as if they were two packed sixteen 8bit unsigned integers, saturating on overflow tou8::MAX
.  u8x16_all_true
target_family="wasm"
andsimd128
Returns true if all lanes are nonzero, false otherwise.  u8x16_avgr
target_family="wasm"
andsimd128
Lanewise rounding average.  u8x16_bitmask
target_family="wasm"
andsimd128
Extracts the high bit for each lane ina
and produce a scalar mask with all bits concatenated.  u8x16_eq
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit integers.  u8x16_extract_lane
target_family="wasm"
andsimd128
Extracts a lane from a 128bit vector interpreted as 16 packed u8 numbers.  u8x16_ge
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit unsigned integers.  u8x16_gt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit unsigned integers.  u8x16_le
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit unsigned integers.  u8x16_lt
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit unsigned integers.  u8x16_max
target_family="wasm"
andsimd128
Compares lanewise unsigned integers, and returns the maximum of each pair.  u8x16_min
target_family="wasm"
andsimd128
Compares lanewise unsigned integers, and returns the minimum of each pair.  u8x16_narrow_i16x8
target_family="wasm"
andsimd128
Converts two input vectors into a smaller lane vector by narrowing each lane.  u8x16_ne
target_family="wasm"
andsimd128
Compares two 128bit vectors as if they were two vectors of 16 eightbit integers.  u8x16_popcnt
target_family="wasm"
andsimd128
Count the number of bits set to one within each lane.  u8x16_replace_lane
target_family="wasm"
andsimd128
Replaces a lane from a 128bit vector interpreted as 16 packed u8 numbers.  u8x16_shl
target_family="wasm"
andsimd128
Shifts each lane to the left by the specified number of bits.  u8x16_shr
target_family="wasm"
andsimd128
Shifts each lane to the right by the specified number of bits, shifting in zeros.  u8x16_shuffle
target_family="wasm"
andsimd128
Returns a new vector with lanes selected from the lanes of the two input vectors$a
and$b
specified in the 16 immediate operands.  u8x16_splat
target_family="wasm"
andsimd128
Creates a vector with identical lanes.  u8x16_sub
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed sixteen 8bit integers.  u8x16_sub_sat
target_family="wasm"
andsimd128
Subtracts two 128bit vectors as if they were two packed sixteen 8bit unsigned integers, saturating on overflow to 0.  u8x16_swizzle
- `u8x16_swizzle` (wasm + `simd128`): Returns a new vector with lanes selected from the lanes of the first input vector `a` as specified in the second input vector `s`.
- `u16x8` (wasm): Materializes a SIMD value from the provided operands.
- `u16x8_add` (wasm + `simd128`): Adds two 128-bit vectors as if they were two packed eight 16-bit integers.
- `u16x8_add_sat` (wasm + `simd128`): Adds two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to `u16::MAX`.
- `u16x8_all_true` (wasm + `simd128`): Returns true if all lanes are nonzero, false otherwise.
- `u16x8_avgr` (wasm + `simd128`): Lane-wise rounding average.
- `u16x8_bitmask` (wasm + `simd128`): Extracts the high bit of each lane in `a` and produces a scalar mask with all bits concatenated.
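The swizzle operation differs from shuffle in that its indices are runtime values rather than immediates, and the WebAssembly spec defines out-of-range indices to select zero. A scalar sketch of that behavior:

```rust
// Scalar model of u8x16_swizzle: each output lane i is a[s[i]],
// and any index >= 16 selects 0 rather than trapping.
fn u8x16_swizzle_model(a: [u8; 16], s: [u8; 16]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = if (s[i] as usize) < 16 { a[s[i] as usize] } else { 0 };
    }
    out
}

fn main() {
    let a: [u8; 16] = core::array::from_fn(|i| (i as u8) + 100);
    // Reversing the lanes: output lane 0 takes input lane 15.
    let rev: [u8; 16] = core::array::from_fn(|i| 15 - i as u8);
    assert_eq!(u8x16_swizzle_model(a, rev)[0], 115);
    // Out-of-range indices produce 0.
    assert_eq!(u8x16_swizzle_model(a, [255u8; 16]), [0u8; 16]);
    println!("ok");
}
```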
- `u16x8_eq` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
- `u16x8_extadd_pairwise_u8x16` (wasm + `simd128`): Integer extended pairwise addition producing extended results (twice-wider results than the inputs).
- `u16x8_extend_high_u8x16` (wasm + `simd128`): Converts the high half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u16x8_extend_low_u8x16` (wasm + `simd128`): Converts the low half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u16x8_extmul_high_u8x16` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u16x8_extmul_low_u8x16` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u16x8_extract_lane` (wasm + `simd128`): Extracts a lane from a 128-bit vector interpreted as 8 packed `u16` numbers.
- `u16x8_ge` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
- `u16x8_gt` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
- `u16x8_le` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
- `u16x8_load_extend_u8x8` ⚠ (wasm + `simd128`): Loads eight 8-bit integers and zero extends each one to a 16-bit lane.
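The "extended" operations widen the lanes before computing, which is what prevents overflow. A scalar sketch of the pairwise-add and extended-multiply semantics:

```rust
// Scalar model of u16x8_extadd_pairwise_u8x16: adjacent pairs of u8
// lanes are summed in a wider u16, so the sum cannot overflow.
fn extadd_pairwise_u8x16_model(a: [u8; 16]) -> [u16; 8] {
    core::array::from_fn(|i| a[2 * i] as u16 + a[2 * i + 1] as u16)
}

// Scalar model of u16x8_extmul_low_u8x16: the low 8 lanes of each
// input are multiplied into full u16 results (no truncation).
fn extmul_low_u8x16_model(a: [u8; 16], b: [u8; 16]) -> [u16; 8] {
    core::array::from_fn(|i| a[i] as u16 * b[i] as u16)
}

fn main() {
    let a = [255u8; 16];
    // 255 + 255 = 510: representable only because the result widens.
    assert_eq!(extadd_pairwise_u8x16_model(a), [510u16; 8]);
    // 255 * 255 = 65025: likewise only fits in the widened lanes.
    assert_eq!(extmul_low_u8x16_model(a, a), [65025u16; 8]);
    println!("ok");
}
```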
- `u16x8_lt` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
- `u16x8_max` (wasm + `simd128`): Compares lane-wise unsigned integers and returns the maximum of each pair.
- `u16x8_min` (wasm + `simd128`): Compares lane-wise unsigned integers and returns the minimum of each pair.
- `u16x8_mul` (wasm + `simd128`): Multiplies two 128-bit vectors as if they were two packed eight 16-bit signed integers.
- `u16x8_narrow_i32x4` (wasm + `simd128`): Converts two input vectors into a smaller-lane vector by narrowing each lane.
- `u16x8_ne` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
- `u16x8_replace_lane` (wasm + `simd128`): Replaces a lane in a 128-bit vector interpreted as 8 packed `u16` numbers.
- `u16x8_shl` (wasm + `simd128`): Shifts each lane to the left by the specified number of bits.
- `u16x8_shr` (wasm + `simd128`): Shifts each lane to the right by the specified number of bits, shifting in zeros.
- `u16x8_shuffle` (wasm + `simd128`): Same as `i8x16_shuffle`, except it operates as if the inputs were eight 16-bit integers, taking only 8 indices to shuffle.
- `u16x8_splat` (wasm + `simd128`): Creates a vector with identical lanes.
- `u16x8_sub` (wasm + `simd128`): Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.
- `u16x8_sub_sat` (wasm + `simd128`): Subtracts two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to 0.
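The narrowing operations are the inverse of the extends: two wider-lane inputs are concatenated and each signed lane is clamped to the narrower lane's range. A scalar sketch of `u8x16_narrow_i16x8` (the unsigned-output variant, which clamps signed inputs to `[0, 255]`):

```rust
// Scalar model of u8x16_narrow_i16x8: the two i16x8 inputs are
// concatenated and each signed 16-bit lane is clamped to the u8
// range [0, 255] (unsigned saturating narrow).
fn u8x16_narrow_i16x8_model(a: [i16; 8], b: [i16; 8]) -> [u8; 16] {
    let narrow = |x: i16| x.clamp(0, 255) as u8;
    let mut out = [0u8; 16];
    for i in 0..8 {
        out[i] = narrow(a[i]);
        out[i + 8] = narrow(b[i]);
    }
    out
}

fn main() {
    let a = [-5i16, 0, 1, 127, 128, 255, 256, 300];
    let r = u8x16_narrow_i16x8_model(a, [0i16; 8]);
    // Negative lanes clamp to 0; lanes above 255 clamp to 255.
    assert_eq!(&r[..8], &[0, 0, 1, 127, 128, 255, 255, 255]);
    println!("ok");
}
```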
- `u32x4` (wasm): Materializes a SIMD value from the provided operands.
- `u32x4_add` (wasm + `simd128`): Adds two 128-bit vectors as if they were two packed four 32-bit integers.
- `u32x4_all_true` (wasm + `simd128`): Returns true if all lanes are nonzero, false otherwise.
- `u32x4_bitmask` (wasm + `simd128`): Extracts the high bit of each lane in `a` and produces a scalar mask with all bits concatenated.
- `u32x4_eq` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
- `u32x4_extadd_pairwise_u16x8` (wasm + `simd128`): Integer extended pairwise addition producing extended results (twice-wider results than the inputs).
- `u32x4_extend_high_u16x8` (wasm + `simd128`): Converts the high half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u32x4_extend_low_u16x8` (wasm + `simd128`): Converts the low half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u32x4_extmul_high_u16x8` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u32x4_extmul_low_u16x8` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u32x4_extract_lane` (wasm + `simd128`): Extracts a lane from a 128-bit vector interpreted as 4 packed `u32` numbers.
- `u32x4_ge` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
- `u32x4_gt` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
- `u32x4_le` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
- `u32x4_load_extend_u16x4` ⚠ (wasm + `simd128`): Loads four 16-bit integers and zero extends each one to a 32-bit lane.
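The `bitmask` family packs one bit per lane into a scalar, which is handy for branching on lane-wise comparison results. A scalar sketch of `u32x4_bitmask`:

```rust
// Scalar model of u32x4_bitmask: take the high (sign) bit of each
// 32-bit lane and concatenate them, with lane 0 in bit 0.
fn u32x4_bitmask_model(a: [u32; 4]) -> u8 {
    let mut mask = 0u8;
    for (i, lane) in a.iter().enumerate() {
        mask |= (((lane >> 31) & 1) as u8) << i;
    }
    mask
}

fn main() {
    // Lanes 1 and 3 have their high bit set -> mask 0b1010.
    assert_eq!(u32x4_bitmask_model([0, 0x8000_0000, 1, 0xFFFF_FFFF]), 0b1010);
    assert_eq!(u32x4_bitmask_model([0; 4]), 0);
    println!("ok");
}
```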
- `u32x4_lt` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
- `u32x4_max` (wasm + `simd128`): Compares lane-wise unsigned integers and returns the maximum of each pair.
- `u32x4_min` (wasm + `simd128`): Compares lane-wise unsigned integers and returns the minimum of each pair.
- `u32x4_mul` (wasm + `simd128`): Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.
- `u32x4_ne` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
- `u32x4_replace_lane` (wasm + `simd128`): Replaces a lane in a 128-bit vector interpreted as 4 packed `u32` numbers.
- `u32x4_shl` (wasm + `simd128`): Shifts each lane to the left by the specified number of bits.
- `u32x4_shr` (wasm + `simd128`): Shifts each lane to the right by the specified number of bits, shifting in zeros.
- `u32x4_shuffle` (wasm + `simd128`): Same as `i8x16_shuffle`, except it operates as if the inputs were four 32-bit integers, taking only 4 indices to shuffle.
- `u32x4_splat` (wasm + `simd128`): Creates a vector with identical lanes.
- `u32x4_sub` (wasm + `simd128`): Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.
- `u32x4_trunc_sat_f32x4` (wasm + `simd128`): Converts a 128-bit vector interpreted as four 32-bit floating-point numbers into a 128-bit vector of four 32-bit unsigned integers.
- `u32x4_trunc_sat_f64x2_zero` (wasm + `simd128`): Saturating conversion of the two double-precision floating-point lanes to two lower integer lanes using the IEEE `convertToIntegerTowardZero` function.
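Unlike the scalar `i32.trunc_f32_s` family, the SIMD `trunc_sat` conversions never trap: out-of-range values saturate and NaN becomes zero. A scalar sketch of one `u32x4_trunc_sat_f32x4` lane:

```rust
// Scalar model of one lane of u32x4_trunc_sat_f32x4: truncate toward
// zero, saturating out-of-range values and mapping NaN to 0.
fn trunc_sat_u32_model(x: f32) -> u32 {
    if x.is_nan() {
        0
    } else if x <= 0.0 {
        0
    } else if x >= u32::MAX as f32 {
        u32::MAX
    } else {
        x as u32 // Rust's `as` also saturates; spelled out here for clarity
    }
}

fn main() {
    assert_eq!(trunc_sat_u32_model(3.9), 3);         // truncates toward zero
    assert_eq!(trunc_sat_u32_model(-1.5), 0);        // negative saturates to 0
    assert_eq!(trunc_sat_u32_model(1e20), u32::MAX); // overflow saturates
    assert_eq!(trunc_sat_u32_model(f32::NAN), 0);    // NaN becomes 0
    println!("ok");
}
```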
- `u64x2` (wasm): Materializes a SIMD value from the provided operands.
- `u64x2_add` (wasm + `simd128`): Adds two 128-bit vectors as if they were two packed two 64-bit integers.
- `u64x2_all_true` (wasm + `simd128`): Returns true if all lanes are nonzero, false otherwise.
- `u64x2_bitmask` (wasm + `simd128`): Extracts the high bit of each lane in `a` and produces a scalar mask with all bits concatenated.
- `u64x2_eq` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
- `u64x2_extend_high_u32x4` (wasm + `simd128`): Converts the high half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u64x2_extend_low_u32x4` (wasm + `simd128`): Converts the low half of the smaller-lane vector to a larger-lane vector, zero extended.
- `u64x2_extmul_high_u32x4` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u64x2_extmul_low_u32x4` (wasm + `simd128`): Lane-wise integer extended multiplication producing a twice-wider result than the inputs.
- `u64x2_extract_lane` (wasm + `simd128`): Extracts a lane from a 128-bit vector interpreted as 2 packed `u64` numbers.
- `u64x2_load_extend_u32x2` ⚠ (wasm + `simd128`): Loads two 32-bit integers and zero extends each one to a 64-bit lane.
- `u64x2_mul` (wasm + `simd128`): Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.
- `u64x2_ne` (wasm + `simd128`): Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
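One detail of the `shl`/`shr` families worth knowing: the WebAssembly SIMD spec takes the shift amount modulo the lane width, so an "over-shift" never zeroes the lanes. A scalar sketch for `u64x2_shl`:

```rust
// Scalar model of u64x2_shl: WebAssembly SIMD shifts take the shift
// amount modulo the lane width, so shifting u64 lanes by 65 means
// shifting by 1, not producing zero.
fn u64x2_shl_model(a: [u64; 2], amt: u32) -> [u64; 2] {
    let amt = amt % 64; // lane width for u64x2
    [a[0] << amt, a[1] << amt]
}

fn main() {
    assert_eq!(u64x2_shl_model([1, 3], 4), [16, 48]);
    // 65 % 64 == 1, so this shifts by one bit.
    assert_eq!(u64x2_shl_model([1, 3], 65), [2, 6]);
    println!("ok");
}
```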
- `u64x2_replace_lane` (wasm + `simd128`): Replaces a lane in a 128-bit vector interpreted as 2 packed `u64` numbers.
- `u64x2_shl` (wasm + `simd128`): Shifts each lane to the left by the specified number of bits.
- `u64x2_shr` (wasm + `simd128`): Shifts each lane to the right by the specified number of bits, shifting in zeros.
- `u64x2_shuffle` (wasm + `simd128`): Same as `i8x16_shuffle`, except it operates as if the inputs were two 64-bit integers, taking only 2 indices to shuffle.
- `u64x2_splat` (wasm + `simd128`): Creates a vector with identical lanes.
- `u64x2_sub` (wasm + `simd128`): Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.
- `unreachable` (wasm): Generates the `unreachable` instruction, which causes an unconditional trap.
- `v128_and` (wasm + `simd128`): Performs a bitwise and of the two input 128-bit vectors, returning the resulting vector.
- `v128_andnot` (wasm + `simd128`): Bitwise AND of the bits of `a` and the logical inverse of the bits of `b`.
- `v128_any_true` (wasm + `simd128`): Returns `true` if any bit in `a` is set, or `false` otherwise.
- `v128_bitselect` (wasm + `simd128`): Uses the bitmask in `c` to select bits from `v1` when 1 and `v2` when 0.
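`v128_bitselect` is a pure bitwise blend, equivalent to `(v1 & c) | (v2 & !c)`. A scalar sketch on one `u64` half of the 128-bit value:

```rust
// Scalar model of v128_bitselect on a single u64 chunk: result bits
// come from v1 where the mask bit is 1 and from v2 where it is 0.
fn bitselect_model(v1: u64, v2: u64, c: u64) -> u64 {
    (v1 & c) | (v2 & !c)
}

fn main() {
    let v1 = 0xFFFF_FFFF_FFFF_FFFFu64;
    let v2 = 0x0000_0000_0000_0000u64;
    // Mask picks the low 32 bits from v1 and the high 32 bits from v2.
    assert_eq!(bitselect_model(v1, v2, 0x0000_0000_FFFF_FFFF), 0x0000_0000_FFFF_FFFF);
    println!("ok");
}
```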
- `v128_load` ⚠ (wasm + `simd128`): Loads a `v128` vector from the given heap address.
- `v128_load8_lane` ⚠ (wasm + `simd128`): Loads an 8-bit value from `m` and sets lane `L` of `v` to that value.
- `v128_load8_splat` ⚠ (wasm + `simd128`): Loads a single element and splats it to all lanes of a `v128` vector.
- `v128_load16_lane` ⚠ (wasm + `simd128`): Loads a 16-bit value from `m` and sets lane `L` of `v` to that value.
- `v128_load16_splat` ⚠ (wasm + `simd128`): Loads a single element and splats it to all lanes of a `v128` vector.
- `v128_load32_lane` ⚠ (wasm + `simd128`): Loads a 32-bit value from `m` and sets lane `L` of `v` to that value.
- `v128_load32_splat` ⚠ (wasm + `simd128`): Loads a single element and splats it to all lanes of a `v128` vector.
- `v128_load32_zero` ⚠ (wasm + `simd128`): Loads a 32-bit element into the low bits of the vector and sets all other bits to zero.
- `v128_load64_lane` ⚠ (wasm + `simd128`): Loads a 64-bit value from `m` and sets lane `L` of `v` to that value.
- `v128_load64_splat` ⚠ (wasm + `simd128`): Loads a single element and splats it to all lanes of a `v128` vector.
- `v128_load64_zero` ⚠ (wasm + `simd128`): Loads a 64-bit element into the low bits of the vector and sets all other bits to zero.
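The difference between the `_zero` and `_splat` loads is only in what happens to the other lanes. A scalar sketch with the `v128` viewed as four `u32` lanes:

```rust
// Scalar model of v128_load32_zero vs v128_load32_splat, with the
// v128 represented as four u32 lanes.
fn load32_zero_model(m: u32) -> [u32; 4] {
    [m, 0, 0, 0] // low lane loaded, everything else zeroed
}

fn load32_splat_model(m: u32) -> [u32; 4] {
    [m; 4] // the loaded element is broadcast to every lane
}

fn main() {
    assert_eq!(load32_zero_model(7), [7, 0, 0, 0]);
    assert_eq!(load32_splat_model(7), [7, 7, 7, 7]);
    println!("ok");
}
```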
- `v128_not` (wasm + `simd128`): Flips each bit of the 128-bit input vector.
- `v128_or` (wasm + `simd128`): Performs a bitwise or of the two input 128-bit vectors, returning the resulting vector.
- `v128_store` ⚠ (wasm + `simd128`): Stores a `v128` vector to the given heap address.
- `v128_store8_lane` ⚠ (wasm + `simd128`): Stores the 8-bit value from lane `L` of `v` into `m`.
- `v128_store16_lane` ⚠ (wasm + `simd128`): Stores the 16-bit value from lane `L` of `v` into `m`.
- `v128_store32_lane` ⚠ (wasm + `simd128`): Stores the 32-bit value from lane `L` of `v` into `m`.
- `v128_store64_lane` ⚠ (wasm + `simd128`): Stores the 64-bit value from lane `L` of `v` into `m`.
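The lane stores write only the selected lane's bytes to memory, leaving the rest of the vector untouched. A scalar sketch of `v128_store32_lane` with the vector as four `u32` lanes:

```rust
// Scalar model of v128_store32_lane: only the selected 32-bit lane of
// the vector is written to memory; the vector itself is unchanged.
fn store32_lane_model(v: [u32; 4], lane: usize, m: &mut u32) {
    *m = v[lane];
}

fn main() {
    let v = [10u32, 20, 30, 40];
    let mut mem = 0u32;
    store32_lane_model(v, 2, &mut mem);
    assert_eq!(mem, 30);
    println!("ok");
}
```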
- `v128_xor` (wasm + `simd128`): Performs a bitwise xor of the two input 128-bit vectors, returning the resulting vector.
- Computes `a * b + c` with either one rounding or two roundings.
- A relaxed version of `f32x4_max`, which is either `f32x4_max` or `f32x4_pmax`.
- A relaxed version of `f32x4_min`, which is either `f32x4_min` or `f32x4_pmin`.
- Computes `a * b + c` with either one rounding or two roundings.
- Computes `a * b + c` with either one rounding or two roundings.
- A relaxed version of `f64x2_max`, which is either `f64x2_max` or `f64x2_pmax`.
- A relaxed version of `f64x2_min`, which is either `f64x2_min` or `f64x2_pmin`.
- Computes `a * b + c` with either one rounding or two roundings.
- A relaxed version of `v128_bitselect`, which either behaves the same as `v128_bitselect`, or the high bit of each lane of `m` is inspected and the corresponding lane of `a` is chosen if the bit is 1 or the lane of `b` if it is 0.
- A relaxed version of `i8x16_swizzle(a, s)`, which selects lanes from `a` using indices in `s`.
- A relaxed dot-product instruction.
- A relaxed version of `v128_bitselect`, which either behaves the same as `v128_bitselect`, or the high bit of each lane of `m` is inspected and the corresponding lane of `a` is chosen if the bit is 1 or the lane of `b` if it is 0.
- A relaxed version of `i16x8_relaxed_q15mulr`, where if both lanes are `i16::MIN` then the result is either `i16::MIN` or `i16::MAX`.
- Similar to `i16x8_relaxed_dot_i8x16_i7x16`, except that the intermediate `i16x8` result is fed into `i32x4_extadd_pairwise_i16x8` followed by `i32x4_add` to add the value `c` to the result.
- A relaxed version of `v128_bitselect`, which either behaves the same as `v128_bitselect`, or the high bit of each lane of `m` is inspected and the corresponding lane of `a` is chosen if the bit is 1 or the lane of `b` if it is 0.
- A relaxed version of `i32x4_trunc_sat_f32x4(a)`, which converts the `f32` lanes of `a` to signed 32-bit integers.
- A relaxed version of `i32x4_trunc_sat_f64x2_zero(a)`, which converts the `f64` lanes of `a` to signed 32-bit integers and sets the upper two lanes to zero.
- A relaxed version of `v128_bitselect`, which either behaves the same as `v128_bitselect`, or the high bit of each lane of `m` is inspected and the corresponding lane of `a` is chosen if the bit is 1 or the lane of `b` if it is 0.
- Corresponding intrinsic to wasm's `memory.atomic.notify` instruction.
- Corresponding intrinsic to wasm's `memory.atomic.wait32` instruction.
- Corresponding intrinsic to wasm's `memory.atomic.wait64` instruction.
- A relaxed version of `u32x4_trunc_sat_f32x4(a)`, which converts the `f32` lanes of `a` to unsigned 32-bit integers.
- A relaxed version of `u32x4_trunc_sat_f64x2_zero(a)`, which converts the `f64` lanes of `a` to unsigned 32-bit integers and sets the upper two lanes to zero.
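The "one rounding or two roundings" wording for the relaxed `a * b + c` intrinsics is the fused-vs-unfused multiply-add distinction, and the two really can disagree. A scalar sketch (assuming `f32::mul_add` gives a correctly rounded fused result, as its documentation states):

```rust
// Demonstrates why "one rounding" (fused) and "two roundings"
// (separate multiply then add) can differ: the exact product
// 4097 * 4097 = 16_785_409 is not representable in f32, so the
// unfused path loses the low bit before the addition.
fn main() {
    let a = 4097.0f32; // 2^12 + 1
    let b = 4097.0f32;
    let c = -16_785_408.0f32; // nearest f32 below the exact product

    let fused = a.mul_add(b, c); // one rounding: exact product, then add
    let unfused = a * b + c;     // two roundings: product rounds first

    assert_eq!(fused, 1.0);   // 16_785_409 - 16_785_408
    assert_eq!(unfused, 0.0); // product already rounded to 16_785_408
    println!("fused = {fused}, unfused = {unfused}");
}
```

Relaxed SIMD permits either answer, which is exactly why these intrinsics are nondeterministic across implementations.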