Module core::arch::wasm64

🔬 This is a nightly-only experimental API. (simd_wasm64 #90599)
Available on WebAssembly only.

Platform-specific intrinsics for the wasm64 platform.

See the module documentation for more details.
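
As a quick orientation, here is a minimal sketch of how these intrinsics fit together. It assumes a nightly toolchain with the `simd_wasm64` feature, a wasm64 target (for example `wasm64-unknown-unknown`), and the `simd128` target feature enabled at compile time (e.g. `-C target-feature=+simd128`); it is illustrative only, not a canonical usage pattern.

```rust
// Sketch only: assumes nightly + `#![feature(simd_wasm64)]` and `+simd128`.
#![feature(simd_wasm64)]

use core::arch::wasm64::*;

/// Lane-wise addition of two groups of four `f32` values.
fn add_lanes(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    // Materialize two `v128` values from scalar operands.
    let va = f32x4(a[0], a[1], a[2], a[3]);
    let vb = f32x4(b[0], b[1], b[2], b[3]);
    // A single SIMD operation adds all four lanes at once.
    let sum = f32x4_add(va, vb);
    // Read the lanes back out as scalars.
    [
        f32x4_extract_lane::<0>(sum),
        f32x4_extract_lane::<1>(sum),
        f32x4_extract_lane::<2>(sum),
        f32x4_extract_lane::<3>(sum),
    ]
}
```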

Structs

  • v128Experimentaltarget_family="wasm"
    WASM-specific 128-bit wide SIMD vector type.
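
A short sketch (same assumptions as the example above): `v128` itself carries no lane interpretation; each intrinsic imposes one, so the same bits can be read back under different lane shapes.

```rust
#![feature(simd_wasm64)]

use core::arch::wasm64::*;

/// The same 128 bits viewed under two different lane interpretations.
fn reinterpret_demo() -> (u8, u32) {
    let v = u8x16_splat(0x01);             // sixteen 0x01 bytes
    let byte = u8x16_extract_lane::<0>(v); // viewed as u8 lanes  -> 0x01
    let word = u32x4_extract_lane::<0>(v); // viewed as u32 lanes -> 0x0101_0101
    (byte, word)
}
```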

Functions

  • f32x4Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • f32x4_absExperimentaltarget_family="wasm" and simd128
    Calculates the absolute value of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_addExperimentaltarget_family="wasm" and simd128
    Lane-wise addition of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_ceilExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not smaller than the input.
  • f32x4_convert_i32x4Experimentaltarget_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit signed integers into a 128-bit vector of four 32-bit floating point numbers.
  • f32x4_convert_u32x4Experimentaltarget_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit unsigned integers into a 128-bit vector of four 32-bit floating point numbers.
  • f32x4_demote_f64x2_zeroExperimentaltarget_family="wasm" and simd128
    Conversion of the two double-precision floating point lanes to two lower single-precision lanes of the result. The two higher lanes of the result are initialized to zero. If the conversion result is not representable as a single-precision floating point number, it is rounded to the nearest-even representable number.
  • f32x4_divExperimentaltarget_family="wasm" and simd128
    Lane-wise division of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed f32 numbers.
  • f32x4_floorExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not greater than the input.
  • f32x4_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_maxExperimentaltarget_family="wasm" and simd128
    Calculates the lane-wise maximum of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_minExperimentaltarget_family="wasm" and simd128
    Calculates the lane-wise minimum of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_mulExperimentaltarget_family="wasm" and simd128
    Lane-wise multiplication of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_nearestExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.
  • f32x4_negExperimentaltarget_family="wasm" and simd128
    Negates each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_pmaxExperimentaltarget_family="wasm" and simd128
    Lane-wise maximum value, defined as a < b ? b : a
  • f32x4_pminExperimentaltarget_family="wasm" and simd128
    Lane-wise minimum value, defined as b < a ? b : a
  • f32x4_relaxed_maddExperimentaltarget_family="wasm" and relaxed-simd
    Computes a * b + c with either one rounding or two roundings.
  • f32x4_relaxed_maxExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f32x4_max which is either f32x4_max or f32x4_pmax.
  • f32x4_relaxed_minExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f32x4_min which is either f32x4_min or f32x4_pmin.
  • f32x4_relaxed_nmaddExperimentaltarget_family="wasm" and relaxed-simd
    Computes -a * b + c with either one rounding or two roundings.
  • f32x4_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed f32 numbers.
  • f32x4_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • f32x4_sqrtExperimentaltarget_family="wasm" and simd128
    Calculates the square root of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_subExperimentaltarget_family="wasm" and simd128
    Lane-wise subtraction of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_truncExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.
  • f64x2Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • f64x2_absExperimentaltarget_family="wasm" and simd128
    Calculates the absolute value of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_addExperimentaltarget_family="wasm" and simd128
    Lane-wise add of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_ceilExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not smaller than the input.
  • f64x2_convert_low_i32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise conversion from integer to floating point.
  • f64x2_convert_low_u32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise conversion from integer to floating point.
  • f64x2_divExperimentaltarget_family="wasm" and simd128
    Lane-wise divide of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed f64 numbers.
  • f64x2_floorExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not greater than the input.
  • f64x2_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_maxExperimentaltarget_family="wasm" and simd128
    Calculates the lane-wise maximum of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_minExperimentaltarget_family="wasm" and simd128
    Calculates the lane-wise minimum of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_mulExperimentaltarget_family="wasm" and simd128
    Lane-wise multiply of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_nearestExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.
  • f64x2_negExperimentaltarget_family="wasm" and simd128
    Negates each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_pmaxExperimentaltarget_family="wasm" and simd128
    Lane-wise maximum value, defined as a < b ? b : a
  • f64x2_pminExperimentaltarget_family="wasm" and simd128
    Lane-wise minimum value, defined as b < a ? b : a
  • f64x2_promote_low_f32x4Experimentaltarget_family="wasm" and simd128
    Conversion of the two lower single-precision floating point lanes to the two double-precision lanes of the result.
  • f64x2_relaxed_maddExperimentaltarget_family="wasm" and relaxed-simd
    Computes a * b + c with either one rounding or two roundings.
  • f64x2_relaxed_maxExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f64x2_max which is either f64x2_max or f64x2_pmax.
  • f64x2_relaxed_minExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f64x2_min which is either f64x2_min or f64x2_pmin.
  • f64x2_relaxed_nmaddExperimentaltarget_family="wasm" and relaxed-simd
    Computes -a * b + c with either one rounding or two roundings.
  • f64x2_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed f64 numbers.
  • f64x2_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • f64x2_sqrtExperimentaltarget_family="wasm" and simd128
    Calculates the square root of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_subExperimentaltarget_family="wasm" and simd128
    Lane-wise subtract of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_truncExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.
  • i8x16Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i8x16_absExperimentaltarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i8x16_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • i8x16_add_satExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MAX.
  • i8x16_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i8x16_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i8x16_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • i8x16_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 16 packed i8 numbers.
  • i8x16_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i8x16_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i8x16_narrow_i16x8Experimentaltarget_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • i8x16_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • i8x16_negExperimentaltarget_family="wasm" and simd128
    Negates a 128-bit vector interpreted as sixteen 8-bit signed integers.
  • i8x16_popcntExperimentaltarget_family="wasm" and simd128
    Count the number of bits set to one within each lane.
  • i8x16_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i8x16_relaxed_swizzleExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i8x16_swizzle(a, s) which selects lanes from a using indices in s.
  • i8x16_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 16 packed i8 numbers.
  • i8x16_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i8x16_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i8x16_shuffleExperimentaltarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the two input vectors a and b specified in the 16 immediate operands.
  • i8x16_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i8x16_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • i8x16_sub_satExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MIN.
  • i8x16_swizzleExperimentaltarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.
  • i16x8Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i16x8_absExperimentaltarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i16x8_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit integers.
  • i16x8_add_satExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MAX.
  • i16x8_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i16x8_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i16x8_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • i16x8_extadd_pairwise_i8x16Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i16x8_extadd_pairwise_u8x16Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i16x8_extend_high_i8x16Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i16x8_extend_high_u8x16Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i16x8_extend_low_i8x16Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i16x8_extend_low_u8x16Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i16x8_extmul_high_i8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_high_u8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_low_i8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_low_u8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 8 packed i16 numbers.
  • i16x8_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_load_extend_i8x8⚠Experimentaltarget_family="wasm" and simd128
    Load eight 8-bit integers and sign extend each one to a 16-bit lane
  • i16x8_load_extend_u8x8⚠Experimentaltarget_family="wasm" and simd128
    Load eight 8-bit integers and zero extend each one to a 16-bit lane
  • i16x8_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i16x8_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i16x8_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed eight 16-bit signed integers.
  • i16x8_narrow_i32x4Experimentaltarget_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • i16x8_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • i16x8_negExperimentaltarget_family="wasm" and simd128
    Negates a 128-bit vector interpreted as eight 16-bit signed integers.
  • i16x8_q15mulr_satExperimentaltarget_family="wasm" and simd128
    Lane-wise saturating rounding multiplication in Q15 format.
  • i16x8_relaxed_dot_i8x16_i7x16Experimentaltarget_family="wasm" and relaxed-simd
    A relaxed dot-product instruction.
  • i16x8_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i16x8_relaxed_q15mulrExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i16x8_q15mulr_sat where if both lanes are i16::MIN then the result is either i16::MIN or i16::MAX.
  • i16x8_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 8 packed i16 numbers.
  • i16x8_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i16x8_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i16x8_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.
  • i16x8_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i16x8_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.
  • i16x8_sub_satExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MIN.
  • i32x4Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i32x4_absExperimentaltarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i32x4_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed four 32-bit integers.
  • i32x4_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i32x4_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i32x4_dot_i16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise multiply signed 16-bit integers in the two input vectors and add adjacent pairs of the full 32-bit results.
  • i32x4_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • i32x4_extadd_pairwise_i16x8Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i32x4_extadd_pairwise_u16x8Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i32x4_extend_high_i16x8Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i32x4_extend_high_u16x8Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i32x4_extend_low_i16x8Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i32x4_extend_low_u16x8Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i32x4_extmul_high_i16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_high_u16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_low_i16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_low_u16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed i32 numbers.
  • i32x4_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_load_extend_i16x4⚠Experimentaltarget_family="wasm" and simd128
    Load four 16-bit integers and sign extend each one to a 32-bit lane
  • i32x4_load_extend_u16x4⚠Experimentaltarget_family="wasm" and simd128
    Load four 16-bit integers and zero extend each one to a 32-bit lane
  • i32x4_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i32x4_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i32x4_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.
  • i32x4_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • i32x4_negExperimentaltarget_family="wasm" and simd128
    Negates a 128-bit vector interpreted as four 32-bit signed integers.
  • i32x4_relaxed_dot_i8x16_i7x16_addExperimentaltarget_family="wasm" and relaxed-simd
    Similar to i16x8_relaxed_dot_i8x16_i7x16 except that the intermediate i16x8 result is fed into i32x4_extadd_pairwise_i16x8 followed by i32x4_add to add the value c to the result.
  • i32x4_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i32x4_relaxed_trunc_f32x4Experimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i32x4_trunc_sat_f32x4(a) which converts the f32 lanes of a to signed 32-bit integers.
  • i32x4_relaxed_trunc_f64x2_zeroExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i32x4_trunc_sat_f64x2_zero(a) which converts the f64 lanes of a to signed 32-bit integers, with the upper two lanes set to zero.
  • i32x4_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed i32 numbers.
  • i32x4_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i32x4_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i32x4_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.
  • i32x4_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i32x4_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.
  • i32x4_trunc_sat_f32x4Experimentaltarget_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit signed integers.
  • i32x4_trunc_sat_f64x2_zeroExperimentaltarget_family="wasm" and simd128
    Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.
  • i64x2Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i64x2_absExperimentaltarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i64x2_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed two 64-bit integers.
  • i64x2_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i64x2_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i64x2_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • i64x2_extend_high_i32x4Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i64x2_extend_high_u32x4Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i64x2_extend_low_i32x4Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i64x2_extend_low_u32x4Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i64x2_extmul_high_i32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_high_u32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_low_i32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_low_u32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed i64 numbers.
  • i64x2_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_load_extend_i32x2⚠Experimentaltarget_family="wasm" and simd128
    Load two 32-bit integers and sign extend each one to a 64-bit lane
  • i64x2_load_extend_u32x2⚠Experimentaltarget_family="wasm" and simd128
    Load two 32-bit integers and zero extend each one to a 64-bit lane
  • i64x2_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.
  • i64x2_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • i64x2_negExperimentaltarget_family="wasm" and simd128
    Negates a 128-bit vector interpreted as two 64-bit signed integers.
  • i64x2_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i64x2_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed i64 numbers.
  • i64x2_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i64x2_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i64x2_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.
  • i64x2_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i64x2_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.
  • memory_atomic_notify⚠Experimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.notify instruction
  • memory_atomic_wait32⚠Experimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.wait32 instruction
  • memory_atomic_wait64⚠Experimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.wait64 instruction
  • memory_growExperimentaltarget_family="wasm"
    Corresponding intrinsic to wasm’s memory.grow instruction
  • memory_sizeExperimentaltarget_family="wasm"
    Corresponding intrinsic to wasm’s memory.size instruction
  • throw⚠Experimentaltarget_family="wasm"
    Generates the throw instruction from the exception-handling proposal for WASM.
  • u8x16Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u8x16_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • u8x16_add_satExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to u8::MAX.
  • u8x16_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u8x16_avgrExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding average.
  • u8x16_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u8x16_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • u8x16_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 16 packed u8 numbers.
  • u8x16_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u8x16_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u8x16_narrow_i16x8Experimentaltarget_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • u8x16_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • u8x16_popcntExperimentaltarget_family="wasm" and simd128
    Count the number of bits set to one within each lane.
  • u8x16_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • u8x16_relaxed_swizzleExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i8x16_swizzle(a, s) which selects lanes from a using indices in s.
  • u8x16_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 16 packed u8 numbers.
  • u8x16_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u8x16_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u8x16_shuffleExperimentaltarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the two input vectors a and b specified in the 16 immediate operands.
  • u8x16_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u8x16_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • u8x16_sub_satExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to 0.
  • u8x16_swizzleExperimentaltarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.
  • u16x8Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u16x8_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit integers.
  • u16x8_add_satExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to u16::MAX.
  • u16x8_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u16x8_avgrExperimentaltarget_family="wasm" and simd128
    Lane-wise rounding average.
  • u16x8_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u16x8_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • u16x8_extadd_pairwise_u8x16Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • u16x8_extend_high_u8x16Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u16x8_extend_low_u8x16Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u16x8_extmul_high_u8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u16x8_extmul_low_u8x16Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u16x8_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 8 packed u16 numbers.
  • u16x8_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_load_extend_u8x8⚠Experimentaltarget_family="wasm" and simd128
    Load eight 8-bit integers and zero extend each one to a 16-bit lane
  • u16x8_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u16x8_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u16x8_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed eight 16-bit unsigned integers.
  • u16x8_narrow_i32x4Experimentaltarget_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • u16x8_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • u16x8_relaxed_dot_i8x16_i7x16Experimentaltarget_family="wasm" and relaxed-simd
    A relaxed dot-product instruction.
  • u16x8_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • u16x8_relaxed_q15mulrExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i16x8_q15mulr_sat where if both lanes are i16::MIN then the result is either i16::MIN or i16::MAX.
  • u16x8_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 8 packed u16 numbers.
  • u16x8_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u16x8_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u16x8_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.
  • u16x8_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u16x8_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.
  • u16x8_sub_satExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to 0.
  • u32x4Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u32x4_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed four 32-bit integers.
  • u32x4_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u32x4_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u32x4_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • u32x4_extadd_pairwise_u16x8Experimentaltarget_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • u32x4_extend_high_u16x8Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u32x4_extend_low_u16x8Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u32x4_extmul_high_u16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u32x4_extmul_low_u16x8Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u32x4_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed u32 numbers.
  • u32x4_geExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_gtExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_leExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_load_extend_u16x4⚠Experimentaltarget_family="wasm" and simd128
    Load four 16-bit integers and zero extend each one to a 32-bit lane
  • u32x4_ltExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_maxExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u32x4_minExperimentaltarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u32x4_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed four 32-bit unsigned integers.
  • u32x4_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • u32x4_relaxed_dot_i8x16_i7x16_addExperimentaltarget_family="wasm" and relaxed-simd
    Similar to i16x8_relaxed_dot_i8x16_i7x16 except that the intermediate i16x8 result is fed into i32x4_extadd_pairwise_i16x8 followed by i32x4_add to add the value c to the result.
  • u32x4_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • u32x4_relaxed_trunc_f32x4Experimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of u32x4_trunc_sat_f32x4(a) which converts the f32 lanes of a to unsigned 32-bit integers.
  • u32x4_relaxed_trunc_f64x2_zeroExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of u32x4_trunc_sat_f64x2_zero(a) which converts the f64 lanes of a to unsigned 32-bit integers, with the upper two lanes set to zero.
  • u32x4_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed u32 numbers.
  • u32x4_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u32x4_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u32x4_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.
  • u32x4_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u32x4_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.
  • u32x4_trunc_sat_f32x4Experimentaltarget_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit unsigned integers.
  • u32x4_trunc_sat_f64x2_zeroExperimentaltarget_family="wasm" and simd128
    Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.
  • u64x2Experimentaltarget_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u64x2_addExperimentaltarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed two 64-bit integers.
  • u64x2_all_trueExperimentaltarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u64x2_bitmaskExperimentaltarget_family="wasm" and simd128
    Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u64x2_eqExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • u64x2_extend_high_u32x4Experimentaltarget_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u64x2_extend_low_u32x4Experimentaltarget_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u64x2_extmul_high_u32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u64x2_extmul_low_u32x4Experimentaltarget_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u64x2_extract_laneExperimentaltarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed u64 numbers.
  • u64x2_load_extend_u32x2⚠Experimentaltarget_family="wasm" and simd128
    Load two 32-bit integers and zero extend each one to a 64-bit lane
  • u64x2_mulExperimentaltarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.
  • u64x2_neExperimentaltarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • u64x2_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • u64x2_replace_laneExperimentaltarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed u64 numbers.
  • u64x2_shlExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u64x2_shrExperimentaltarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u64x2_shuffleExperimentaltarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.
  • u64x2_splatExperimentaltarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u64x2_subExperimentaltarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.
  • unreachableExperimentaltarget_family="wasm"
    Generates the unreachable instruction, which causes an unconditional trap.
  • v128_andExperimentaltarget_family="wasm" and simd128
    Performs a bitwise and of the two input 128-bit vectors, returning the resulting vector.
  • v128_andnotExperimentaltarget_family="wasm" and simd128
    Bitwise AND of bits of a and the logical inverse of bits of b.
  • v128_any_trueExperimentaltarget_family="wasm" and simd128
    Returns true if any bit in a is set, or false otherwise.
  • v128_bitselectExperimentaltarget_family="wasm" and simd128
    Use the bitmask in c to select bits from v1 when 1 and v2 when 0.
  • v128_load⚠Experimentaltarget_family="wasm" and simd128
    Loads a v128 vector from the given heap address.
  • v128_load8_lane⚠Experimentaltarget_family="wasm" and simd128
    Loads an 8-bit value from m and sets lane L of v to that value.
  • v128_load8_splat⚠Experimentaltarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load16_lane⚠Experimentaltarget_family="wasm" and simd128
    Loads a 16-bit value from m and sets lane L of v to that value.
  • v128_load16_splat⚠Experimentaltarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load32_lane⚠Experimentaltarget_family="wasm" and simd128
    Loads a 32-bit value from m and sets lane L of v to that value.
  • v128_load32_splat⚠Experimentaltarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load32_zero⚠Experimentaltarget_family="wasm" and simd128
    Loads a 32-bit element into the low bits of the vector and sets all other bits to zero.
  • v128_load64_lane⚠Experimentaltarget_family="wasm" and simd128
    Loads a 64-bit value from m and sets lane L of v to that value.
  • v128_load64_splat⚠Experimentaltarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load64_zero⚠Experimentaltarget_family="wasm" and simd128
    Loads a 64-bit element into the low bits of the vector and sets all other bits to zero.
  • v128_notExperimentaltarget_family="wasm" and simd128
    Flips each bit of the 128-bit input vector.
  • v128_orExperimentaltarget_family="wasm" and simd128
    Performs a bitwise or of the two input 128-bit vectors, returning the resulting vector.
  • v128_store⚠Experimentaltarget_family="wasm" and simd128
    Stores a v128 vector to the given heap address.
  • v128_store8_lane⚠Experimentaltarget_family="wasm" and simd128
    Stores the 8-bit value from lane L of v into m
  • v128_store16_lane⚠Experimentaltarget_family="wasm" and simd128
    Stores the 16-bit value from lane L of v into m
  • v128_store32_lane⚠Experimentaltarget_family="wasm" and simd128
    Stores the 32-bit value from lane L of v into m
  • v128_store64_lane⚠Experimentaltarget_family="wasm" and simd128
    Stores the 64-bit value from lane L of v into m
  • v128_xorExperimentaltarget_family="wasm" and simd128
    Performs a bitwise xor of the two input 128-bit vectors, returning the resulting vector.
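
To close, a small sketch (same assumptions as the earlier examples: nightly, wasm64 target, simd128 enabled) combining a lane-wise comparison with v128_bitselect. The comparison yields all-ones or all-zeros lanes, which is exactly the mask shape bitselect expects; i32x4_max computes the same result directly, so this is only to show how the comparison and selection intrinsics compose.

```rust
#![feature(simd_wasm64)]

use core::arch::wasm64::*;

/// Pick, lane by lane, whichever of two i32x4 vectors is larger
/// (equivalent to `i32x4_max`, written out to show the composition).
fn lanewise_max(a: v128, b: v128) -> v128 {
    // All-ones lanes where a > b, all-zeros lanes otherwise.
    let mask = i32x4_gt(a, b);
    // Bits of `a` are taken where the mask bit is 1, bits of `b` where it is 0.
    v128_bitselect(a, b, mask)
}

fn demo() -> i32 {
    let a = i32x4(1, 20, 3, 40);
    let b = i32x4(10, 2, 30, 4);
    // Lane 1 of the result is 20.
    i32x4_extract_lane::<1>(lanewise_max(a, b))
}
```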