Available on x86 only.
Platform-specific intrinsics for the x86 platform.
See the module documentation for more details.
Structs§
- `CpuidResult` (x86 or x86-64): Result of the `cpuid` instruction.
- `__m128` (x86 or x86-64): 128-bit wide set of four `f32` types, x86-specific
- `__m256` (x86 or x86-64): 256-bit wide set of eight `f32` types, x86-specific
- `__m512` (x86 or x86-64): 512-bit wide set of sixteen `f32` types, x86-specific
- `__m128d` (x86 or x86-64): 128-bit wide set of two `f64` types, x86-specific
- `__m128i` (x86 or x86-64): 128-bit wide integer vector type, x86-specific
- `__m256d` (x86 or x86-64): 256-bit wide set of four `f64` types, x86-specific
- `__m256i` (x86 or x86-64): 256-bit wide integer vector type, x86-specific
- `__m512d` (x86 or x86-64): 512-bit wide set of eight `f64` types, x86-specific
- `__m512i` (x86 or x86-64): 512-bit wide integer vector type, x86-specific
- `__m128bh` (Experimental, x86 or x86-64): 128-bit wide set of eight `u16` types, x86-specific
- `__m128h` (Experimental, x86 or x86-64): 128-bit wide set of 8 `f16` types, x86-specific
- `__m256bh` (Experimental, x86 or x86-64): 256-bit wide set of 16 `u16` types, x86-specific
- `__m256h` (Experimental, x86 or x86-64): 256-bit wide set of 16 `f16` types, x86-specific
- `__m512bh` (Experimental, x86 or x86-64): 512-bit wide set of 32 `u16` types, x86-specific
- `__m512h` (Experimental, x86 or x86-64): 512-bit wide set of 32 `f16` types, x86-specific
- `bf16` (Experimental, x86 or x86-64): The BFloat16 type used in AVX-512 intrinsics.
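As a quick illustration of how these vector types are produced and inspected in practice, here is a minimal sketch (assuming runtime AVX detection; `_mm256_set1_ps` and `_mm256_storeu_ps` are other intrinsics from this module) that builds a `__m256` and reads its lanes back out:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("avx") {
        unsafe {
            // A __m256 holds eight f32 lanes; broadcast 1.5 into all of them.
            let v: __m256 = _mm256_set1_ps(1.5);

            // Copy the lanes back into ordinary memory to inspect them.
            let mut lanes = [0.0f32; 8];
            _mm256_storeu_ps(lanes.as_mut_ptr(), v);
            assert_eq!(lanes, [1.5; 8]);
        }
    }
}
```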
Constants§
- `_CMP_EQ_OQ` (x86 or x86-64): Equal (ordered, non-signaling)
- `_CMP_EQ_OS` (x86 or x86-64): Equal (ordered, signaling)
- `_CMP_EQ_UQ` (x86 or x86-64): Equal (unordered, non-signaling)
- `_CMP_EQ_US` (x86 or x86-64): Equal (unordered, signaling)
- `_CMP_FALSE_OQ` (x86 or x86-64): False (ordered, non-signaling)
- `_CMP_FALSE_OS` (x86 or x86-64): False (ordered, signaling)
- `_CMP_GE_OQ` (x86 or x86-64): Greater-than-or-equal (ordered, non-signaling)
- `_CMP_GE_OS` (x86 or x86-64): Greater-than-or-equal (ordered, signaling)
- `_CMP_GT_OQ` (x86 or x86-64): Greater-than (ordered, non-signaling)
- `_CMP_GT_OS` (x86 or x86-64): Greater-than (ordered, signaling)
- `_CMP_LE_OQ` (x86 or x86-64): Less-than-or-equal (ordered, non-signaling)
- `_CMP_LE_OS` (x86 or x86-64): Less-than-or-equal (ordered, signaling)
- `_CMP_LT_OQ` (x86 or x86-64): Less-than (ordered, non-signaling)
- `_CMP_LT_OS` (x86 or x86-64): Less-than (ordered, signaling)
- `_CMP_NEQ_OQ` (x86 or x86-64): Not-equal (ordered, non-signaling)
- `_CMP_NEQ_OS` (x86 or x86-64): Not-equal (ordered, signaling)
- `_CMP_NEQ_UQ` (x86 or x86-64): Not-equal (unordered, non-signaling)
- `_CMP_NEQ_US` (x86 or x86-64): Not-equal (unordered, signaling)
- `_CMP_NGE_UQ` (x86 or x86-64): Not-greater-than-or-equal (unordered, non-signaling)
- `_CMP_NGE_US` (x86 or x86-64): Not-greater-than-or-equal (unordered, signaling)
- `_CMP_NGT_UQ` (x86 or x86-64): Not-greater-than (unordered, non-signaling)
- `_CMP_NGT_US` (x86 or x86-64): Not-greater-than (unordered, signaling)
- `_CMP_NLE_UQ` (x86 or x86-64): Not-less-than-or-equal (unordered, non-signaling)
- `_CMP_NLE_US` (x86 or x86-64): Not-less-than-or-equal (unordered, signaling)
- `_CMP_NLT_UQ` (x86 or x86-64): Not-less-than (unordered, non-signaling)
- `_CMP_NLT_US` (x86 or x86-64): Not-less-than (unordered, signaling)
- `_CMP_ORD_Q` (x86 or x86-64): Ordered (non-signaling)
- `_CMP_ORD_S` (x86 or x86-64): Ordered (signaling)
- `_CMP_TRUE_UQ` (x86 or x86-64): True (unordered, non-signaling)
- `_CMP_TRUE_US` (x86 or x86-64): True (unordered, signaling)
- `_CMP_UNORD_Q` (x86 or x86-64): Unordered (non-signaling)
- `_CMP_UNORD_S` (x86 or x86-64): Unordered (signaling)
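These predicates are the `IMM5` operand of the `_mm256_cmp_ps`/`_mm256_cmp_pd` intrinsics listed under Functions below. A minimal sketch (assuming AVX; `_mm256_setr_ps` and `_mm256_movemask_ps` are other intrinsics from this module) comparing two vectors lane by lane:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("avx") {
        unsafe {
            let a = _mm256_setr_ps(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0);
            let b = _mm256_set1_ps(4.0);

            // Lane-wise a < b, ordered and non-signaling: each true lane
            // becomes all-ones, each false lane all-zeros.
            let lt = _mm256_cmp_ps::<_CMP_LT_OQ>(a, b);

            // movemask packs the lane sign bits into an integer: lanes
            // 0..=2 (1.0, 2.0, 3.0) compare true here.
            assert_eq!(_mm256_movemask_ps(lt), 0b0000_0111);
        }
    }
}
```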
- `_MM_EXCEPT_DENORM` (x86 or x86-64): See `_mm_setcsr`
- `_MM_EXCEPT_DIV_ZERO` (x86 or x86-64): See `_mm_setcsr`
- `_MM_EXCEPT_INEXACT` (x86 or x86-64): See `_mm_setcsr`
- `_MM_EXCEPT_INVALID` (x86 or x86-64): See `_mm_setcsr`
- `_MM_EXCEPT_MASK` (x86 or x86-64): See `_MM_GET_EXCEPTION_STATE`
- `_MM_EXCEPT_OVERFLOW` (x86 or x86-64): See `_mm_setcsr`
- `_MM_EXCEPT_UNDERFLOW` (x86 or x86-64): See `_mm_setcsr`
- `_MM_FLUSH_ZERO_MASK` (x86 or x86-64): See `_MM_GET_FLUSH_ZERO_MODE`
- `_MM_FLUSH_ZERO_OFF` (x86 or x86-64): See `_mm_setcsr`
- `_MM_FLUSH_ZERO_ON` (x86 or x86-64): See `_mm_setcsr`
- `_MM_FROUND_CEIL` (x86 or x86-64): round up and do not suppress exceptions
- `_MM_FROUND_CUR_DIRECTION` (x86 or x86-64): use MXCSR.RC; see `vendor::_MM_SET_ROUNDING_MODE`
- `_MM_FROUND_FLOOR` (x86 or x86-64): round down and do not suppress exceptions
- `_MM_FROUND_NEARBYINT` (x86 or x86-64): use MXCSR.RC and suppress exceptions; see `vendor::_MM_SET_ROUNDING_MODE`
- `_MM_FROUND_NINT` (x86 or x86-64): round to nearest and do not suppress exceptions
- `_MM_FROUND_NO_EXC` (x86 or x86-64): suppress exceptions
- `_MM_FROUND_RAISE_EXC` (x86 or x86-64): do not suppress exceptions
- `_MM_FROUND_RINT` (x86 or x86-64): use MXCSR.RC and do not suppress exceptions; see `vendor::_MM_SET_ROUNDING_MODE`
- `_MM_FROUND_TO_NEAREST_INT` (x86 or x86-64): round to nearest
- `_MM_FROUND_TO_NEG_INF` (x86 or x86-64): round down
- `_MM_FROUND_TO_POS_INF` (x86 or x86-64): round up
- `_MM_FROUND_TO_ZERO` (x86 or x86-64): truncate
- `_MM_FROUND_TRUNC` (x86 or x86-64): truncate and do not suppress exceptions
- `_MM_HINT_ET0` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_HINT_ET1` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_HINT_NTA` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_HINT_T0` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_HINT_T1` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_HINT_T2` (x86 or x86-64): See `_mm_prefetch`.
- `_MM_MASK_DENORM` (x86 or x86-64): See `_mm_setcsr`
- `_MM_MASK_DIV_ZERO` (x86 or x86-64): See `_mm_setcsr`
- `_MM_MASK_INEXACT` (x86 or x86-64): See `_mm_setcsr`
- `_MM_MASK_INVALID` (x86 or x86-64): See `_mm_setcsr`
- `_MM_MASK_MASK` (x86 or x86-64): See `_MM_GET_EXCEPTION_MASK`
- `_MM_MASK_OVERFLOW` (x86 or x86-64): See `_mm_setcsr`
- `_MM_MASK_UNDERFLOW` (x86 or x86-64): See `_mm_setcsr`
- `_MM_ROUND_DOWN` (x86 or x86-64): See `_mm_setcsr`
- `_MM_ROUND_MASK` (x86 or x86-64): See `_MM_GET_ROUNDING_MODE`
- `_MM_ROUND_NEAREST` (x86 or x86-64): See `_mm_setcsr`
- `_MM_ROUND_TOWARD_ZERO` (x86 or x86-64): See `_mm_setcsr`
- `_MM_ROUND_UP` (x86 or x86-64): See `_mm_setcsr`
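The `_MM_HINT_*` values above are the `STRATEGY` operand of `_mm_prefetch`. A minimal sketch (the data layout and loop are illustrative) that asks for the next chunk to be pulled into all cache levels before it is needed:

```rust
#[cfg(target_arch = "x86_64")]
unsafe fn prefetch_demo(data: &[f32]) -> f32 {
    use std::arch::x86_64::*;

    let mut sum = 0.0;
    for chunk in data.chunks(16) {
        // Hint that the next chunk will be wanted in all cache levels
        // (T0). Prefetching is only a hint: it never faults, and
        // correctness never depends on it.
        let next = chunk.as_ptr().wrapping_add(16);
        _mm_prefetch::<_MM_HINT_T0>(next as *const i8);

        for &x in chunk {
            sum += x;
        }
    }
    sum
}
```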
- `_SIDD_BIT_MASK` (x86 or x86-64): Mask only: return the bit mask
- `_SIDD_CMP_EQUAL_ANY` (x86 or x86-64): For each character in `a`, find if it is in `b` (Default)
- `_SIDD_CMP_EQUAL_EACH` (x86 or x86-64): The strings defined by `a` and `b` are equal
- `_SIDD_CMP_EQUAL_ORDERED` (x86 or x86-64): Search for the defined substring in the target
- `_SIDD_CMP_RANGES` (x86 or x86-64): For each character `c` in `a`, determine if `b[0] <= c <= b[1]` or `b[2] <= c <= b[3]`, and so on for each pair
- `_SIDD_LEAST_SIGNIFICANT` (x86 or x86-64): Index only: return the least significant bit (Default)
- `_SIDD_MASKED_NEGATIVE_POLARITY` (x86 or x86-64): Negates results only before the end of the string
- `_SIDD_MASKED_POSITIVE_POLARITY` (x86 or x86-64): Do not negate results before the end of the string
- `_SIDD_MOST_SIGNIFICANT` (x86 or x86-64): Index only: return the most significant bit
- `_SIDD_NEGATIVE_POLARITY` (x86 or x86-64): Negates results
- `_SIDD_POSITIVE_POLARITY` (x86 or x86-64): Do not negate results (Default)
- `_SIDD_SBYTE_OPS` (x86 or x86-64): String contains signed 8-bit characters
- `_SIDD_SWORD_OPS` (x86 or x86-64): String contains signed 16-bit characters
- `_SIDD_UBYTE_OPS` (x86 or x86-64): String contains unsigned 8-bit characters (Default)
- `_SIDD_UNIT_MASK` (x86 or x86-64): Mask only: return the byte mask
- `_SIDD_UWORD_OPS` (x86 or x86-64): String contains unsigned 16-bit characters
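These flags are OR-ed together to form the `IMM8` operand of the SSE4.2 string intrinsics such as `_mm_cmpistri`. A minimal sketch (assuming SSE4.2, with 16-byte fragments for simplicity; `_mm_loadu_si128` is another intrinsic from this module) that finds the first vowel in a chunk of text:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("sse4.2") {
        unsafe {
            // Character set in `a` (implicit length stops at the NUL),
            // text fragment in `b`.
            let set = b"aeiou\0\0\0\0\0\0\0\0\0\0\0";
            let text = b"xyzzy quick brow";

            let a = _mm_loadu_si128(set.as_ptr() as *const __m128i);
            let b = _mm_loadu_si128(text.as_ptr() as *const __m128i);

            // "Equal any" membership test; return the index of the
            // least significant matching byte of the text.
            const MODE: i32 =
                _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT;
            let idx = _mm_cmpistri::<MODE>(a, b);
            assert_eq!(idx, 7); // the 'u' in "quick"
        }
    }
}
```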
- `_XCR_XFEATURE_ENABLED_MASK` (x86 or x86-64): `XFEATURE_ENABLED_MASK` for `XCR`
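This constant selects extended control register 0 when read with `_xgetbv` (an `xsave`-gated intrinsic from this module). A minimal sketch checking that the OS has enabled saving of XMM and YMM state, which AVX-aware code is expected to verify:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("xsave") {
        unsafe {
            let xcr0 = _xgetbv(_XCR_XFEATURE_ENABLED_MASK);
            // Bit 1 = SSE (XMM) state, bit 2 = AVX (YMM) state.
            let avx_state_enabled = xcr0 & 0b110 == 0b110;
            println!("OS saves XMM+YMM state: {avx_state_enabled}");
        }
    }
}
```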
- `_MM_CMPINT_EQ` (Experimental, x86 or x86-64): Equal
- `_MM_CMPINT_FALSE` (Experimental, x86 or x86-64): False
- `_MM_CMPINT_LE` (Experimental, x86 or x86-64): Less-than-or-equal
- `_MM_CMPINT_LT` (Experimental, x86 or x86-64): Less-than
- `_MM_CMPINT_NE` (Experimental, x86 or x86-64): Not-equal
- `_MM_CMPINT_NLE` (Experimental, x86 or x86-64): Not less-than-or-equal
- `_MM_CMPINT_NLT` (Experimental, x86 or x86-64): Not less-than
- `_MM_CMPINT_TRUE` (Experimental, x86 or x86-64): True
- `_MM_MANT_NORM_1_2` (Experimental, x86 or x86-64): interval [1, 2)
- `_MM_MANT_NORM_P5_1` (Experimental, x86 or x86-64): interval [0.5, 1)
- `_MM_MANT_NORM_P5_2` (Experimental, x86 or x86-64): interval [0.5, 2)
- `_MM_MANT_NORM_P75_1P5` (Experimental, x86 or x86-64): interval [0.75, 1.5)
- `_MM_MANT_SIGN_NAN` (Experimental, x86 or x86-64): DEST = NaN if sign(SRC) = 1
- `_MM_MANT_SIGN_SRC` (Experimental, x86 or x86-64): sign = sign(SRC)
- `_MM_MANT_SIGN_ZERO` (Experimental, x86 or x86-64): sign = 0
- `_MM_PERM_AAAA` through `_MM_PERM_DDDD` (Experimental, x86 or x86-64): the full set of 256 `_MM_PERM_*` shuffle-control constants, one for every four-letter selector over {A, B, C, D} in lexicographic order (AAAA, AAAB, AAAC, ..., DDDC, DDDD).
- `_XABORT_CAPACITY` (Experimental, x86 or x86-64): Transaction abort due to the transaction using too much memory.
- `_XABORT_CONFLICT` (Experimental, x86 or x86-64): Transaction abort due to a memory conflict with another thread.
- `_XABORT_DEBUG` (Experimental, x86 or x86-64): Transaction abort due to a debug trap.
- `_XABORT_EXPLICIT` (Experimental, x86 or x86-64): Transaction explicitly aborted with `xabort`. The parameter passed to `xabort` is available with `_xabort_code(status)`.
- `_XABORT_NESTED` (Experimental, x86 or x86-64): Transaction abort in an inner nested transaction.
- `_XABORT_RETRY` (Experimental, x86 or x86-64): Transaction retry is possible.
- `_XBEGIN_STARTED` (Experimental, x86 or x86-64): Transaction successfully started.
Functions§
- `_MM_GET_EXCEPTION_MASK` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_GET_EXCEPTION_STATE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_GET_FLUSH_ZERO_MODE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_GET_ROUNDING_MODE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_SET_EXCEPTION_MASK` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_SET_EXCEPTION_STATE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_SET_FLUSH_ZERO_MODE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_SET_ROUNDING_MODE` ⚠ (Deprecated, (x86 or x86-64) and sse): See `_mm_setcsr`
- `_MM_TRANSPOSE4_PS` ⚠ ((x86 or x86-64) and sse): Transpose the 4x4 matrix formed by 4 rows of `__m128` in place.
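A minimal sketch (assuming SSE; `_mm_setr_ps` and `_mm_storeu_ps` are other intrinsics from this module) of `_MM_TRANSPOSE4_PS`, which rewrites its four row arguments in place:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("sse") {
        unsafe {
            let mut r0 = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
            let mut r1 = _mm_setr_ps(5.0, 6.0, 7.0, 8.0);
            let mut r2 = _mm_setr_ps(9.0, 10.0, 11.0, 12.0);
            let mut r3 = _mm_setr_ps(13.0, 14.0, 15.0, 16.0);

            // Rows become columns, in place.
            _MM_TRANSPOSE4_PS(&mut r0, &mut r1, &mut r2, &mut r3);

            let mut out = [0.0f32; 4];
            _mm_storeu_ps(out.as_mut_ptr(), r0);
            assert_eq!(out, [1.0, 5.0, 9.0, 13.0]); // first column of the input
        }
    }
}
```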
- `__cpuid` ⚠ (x86 or x86-64): See `__cpuid_count`.
- `__cpuid_count` ⚠ (x86 or x86-64): Returns the result of the `cpuid` instruction for a given `leaf` (`EAX`) and `sub_leaf` (`ECX`).
- `__get_cpuid_max` ⚠ (x86 or x86-64): Returns the highest-supported `leaf` (`EAX`) and sub-leaf (`ECX`) `cpuid` values.
- `__rdtscp` ⚠ (x86 or x86-64): Reads the current value of the processor's time-stamp counter and the `IA32_TSC_AUX MSR`.
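A minimal sketch reading the CPU vendor string with `__cpuid`: leaf 0 returns the vendor bytes in the `ebx`, `edx`, `ecx` fields of the `CpuidResult` struct listed above (the `cpuid` instruction itself is always available on x86-64):

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    // Leaf 0: highest basic leaf in eax, vendor string in ebx/edx/ecx.
    let CpuidResult { eax, ebx, ecx, edx } = unsafe { __cpuid(0) };

    let mut vendor = [0u8; 12];
    vendor[0..4].copy_from_slice(&ebx.to_le_bytes());
    vendor[4..8].copy_from_slice(&edx.to_le_bytes());
    vendor[8..12].copy_from_slice(&ecx.to_le_bytes());

    println!("max leaf {eax}, vendor {}", String::from_utf8_lossy(&vendor));
}
```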
- `_addcarry_u32` ⚠ (x86 or x86-64): Adds unsigned 32-bit integers `a` and `b` with unsigned 8-bit carry-in `c_in` (carry or overflow flag), stores the unsigned 32-bit result in `out`, and returns the carry-out (carry or overflow flag).
- `_addcarryx_u32` ⚠ ((x86 or x86-64) and adx): Adds unsigned 32-bit integers `a` and `b` with unsigned 8-bit carry-in `c_in` (carry or overflow flag), stores the unsigned 32-bit result in `out`, and returns the carry-out (carry or overflow flag).
- `_andn_u32` ⚠ ((x86 or x86-64) and bmi1): Bitwise logical `AND` of inverted `a` with `b`.
- `_bextr2_u32` ⚠ ((x86 or x86-64) and bmi1): Extracts bits of `a` specified by `control` into the least significant bits of the result.
- `_bextr_u32` ⚠ ((x86 or x86-64) and bmi1): Extracts bits in range [`start`, `start` + `length`) from `a` into the least significant bits of the result.
- `_bextri_u32` ⚠ ((x86 or x86-64) and tbm): Extracts bits of `a` specified by `control` into the least significant bits of the result.
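A minimal sketch chaining `_addcarry_u32` to add two 64-bit numbers limb by limb, the pattern used for wide-integer arithmetic:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    // Add two u64 values as pairs of u32 limbs, low limb first.
    let (a_lo, a_hi) = (0xffff_ffffu32, 1u32);
    let (b_lo, b_hi) = (1u32, 2u32);

    unsafe {
        let mut lo = 0u32;
        let mut hi = 0u32;
        // Carry-in 0; the returned carry feeds the next limb.
        let c = _addcarry_u32(0, a_lo, b_lo, &mut lo);
        let c = _addcarry_u32(c, a_hi, b_hi, &mut hi);

        assert_eq!((c, hi, lo), (0, 4, 0)); // 0x1_ffff_ffff + 0x2_0000_0001
    }
}
```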
- `_bittest` ⚠ (x86 or x86-64): Returns the bit in position `b` of the memory addressed by `p`.
- `_bittestandcomplement` ⚠ (x86 or x86-64): Returns the bit in position `b` of the memory addressed by `p`, then inverts that bit.
- `_bittestandreset` ⚠ (x86 or x86-64): Returns the bit in position `b` of the memory addressed by `p`, then resets that bit to `0`.
- `_bittestandset` ⚠ (x86 or x86-64): Returns the bit in position `b` of the memory addressed by `p`, then sets the bit to `1`.
- `_blcfill_u32` ⚠ ((x86 or x86-64) and tbm): Clears all bits below the least significant zero bit of `x`.
- `_blci_u32` ⚠ ((x86 or x86-64) and tbm): Sets all bits of `x` to 1 except for the least significant zero bit.
- `_blcic_u32` ⚠ ((x86 or x86-64) and tbm): Sets the least significant zero bit of `x` and clears all other bits.
- `_blcmsk_u32` ⚠ ((x86 or x86-64) and tbm): Sets the least significant zero bit of `x` and clears all bits above that bit.
- `_blcs_u32` ⚠ ((x86 or x86-64) and tbm): Sets the least significant zero bit of `x`.
- `_blsfill_u32` ⚠ ((x86 or x86-64) and tbm): Sets all bits of `x` below the least significant one.
- `_blsi_u32` ⚠ ((x86 or x86-64) and bmi1): Extracts lowest set isolated bit.
- `_blsic_u32` ⚠ ((x86 or x86-64) and tbm): Clears least significant bit and sets all other bits.
- `_blsmsk_u32` ⚠ ((x86 or x86-64) and bmi1): Gets mask up to lowest set bit.
- `_blsr_u32` ⚠ ((x86 or x86-64) and bmi1): Resets the lowest set bit of `x`.
- `_bswap` ⚠ (x86 or x86-64): Returns an integer with the reversed byte order of `x`
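A minimal sketch (assuming BMI1) of the `_bls*` family, which all manipulate the lowest set bit of their operand:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("bmi1") {
        unsafe {
            let x: u32 = 0b1011_0100;
            // Isolate the lowest set bit.
            assert_eq!(_blsi_u32(x), 0b0000_0100);
            // Mask from bit 0 up to and including the lowest set bit.
            assert_eq!(_blsmsk_u32(x), 0b0000_0111);
            // Clear the lowest set bit.
            assert_eq!(_blsr_u32(x), 0b1011_0000);
        }
    }
}
```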
- `_bzhi_u32` ⚠ ((x86 or x86-64) and bmi2): Zeroes higher bits of `a` >= `index`.
- `_fxrstor` ⚠ ((x86 or x86-64) and fxsr): Restores the `XMM`, `MMX`, `MXCSR`, and `x87` FPU registers from the 512-byte-long 16-byte-aligned memory region `mem_addr`.
- `_fxsave` ⚠ ((x86 or x86-64) and fxsr): Saves the `x87` FPU, `MMX` technology, `XMM`, and `MXCSR` registers to the 512-byte-long 16-byte-aligned memory region `mem_addr`.
- `_lzcnt_u32` ⚠ ((x86 or x86-64) and lzcnt): Counts the leading most significant zero bits.
- `_mm256_abs_epi8` ⚠ ((x86 or x86-64) and avx2): Computes the absolute values of packed 8-bit integers in `a`.
- `_mm256_abs_epi16` ⚠ ((x86 or x86-64) and avx2): Computes the absolute values of packed 16-bit integers in `a`.
- `_mm256_abs_epi32` ⚠ ((x86 or x86-64) and avx2): Computes the absolute values of packed 32-bit integers in `a`.
- `_mm256_add_epi8` ⚠ ((x86 or x86-64) and avx2): Adds packed 8-bit integers in `a` and `b`.
- `_mm256_add_epi16` ⚠ ((x86 or x86-64) and avx2): Adds packed 16-bit integers in `a` and `b`.
- `_mm256_add_epi32` ⚠ ((x86 or x86-64) and avx2): Adds packed 32-bit integers in `a` and `b`.
- `_mm256_add_epi64` ⚠ ((x86 or x86-64) and avx2): Adds packed 64-bit integers in `a` and `b`.
- `_mm256_add_pd` ⚠ ((x86 or x86-64) and avx): Adds packed double-precision (64-bit) floating-point elements in `a` and `b`.
- `_mm256_add_ps` ⚠ ((x86 or x86-64) and avx): Adds packed single-precision (32-bit) floating-point elements in `a` and `b`.
- `_mm256_adds_epi8` ⚠ ((x86 or x86-64) and avx2): Adds packed 8-bit integers in `a` and `b` using saturation.
- `_mm256_adds_epi16` ⚠ ((x86 or x86-64) and avx2): Adds packed 16-bit integers in `a` and `b` using saturation.
- `_mm256_adds_epu8` ⚠ ((x86 or x86-64) and avx2): Adds packed unsigned 8-bit integers in `a` and `b` using saturation.
- `_mm256_adds_epu16` ⚠ ((x86 or x86-64) and avx2): Adds packed unsigned 16-bit integers in `a` and `b` using saturation.
- `_mm256_addsub_pd` ⚠ ((x86 or x86-64) and avx): Alternatively adds and subtracts packed double-precision (64-bit) floating-point elements in `a` to/from packed elements in `b`.
- `_mm256_addsub_ps` ⚠ ((x86 or x86-64) and avx): Alternatively adds and subtracts packed single-precision (32-bit) floating-point elements in `a` to/from packed elements in `b`.
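A minimal sketch (with runtime AVX detection) showing the load/compute/store shape shared by these arithmetic intrinsics, here with `_mm256_add_ps` and the `loadu`/`storeu` intrinsics listed further down:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("avx") {
        unsafe {
            let xs = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
            let ys = [10.0f32; 8];

            // Unaligned loads are fine for loadu; see the load/loadu
            // entries below for the 32-byte-aligned variants.
            let a = _mm256_loadu_ps(xs.as_ptr());
            let b = _mm256_loadu_ps(ys.as_ptr());
            let sum = _mm256_add_ps(a, b);

            let mut out = [0.0f32; 8];
            _mm256_storeu_ps(out.as_mut_ptr(), sum);
            assert_eq!(out, [11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0]);
        }
    }
}
```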
- `_mm256_alignr_epi8` ⚠ ((x86 or x86-64) and avx2): Concatenates pairs of 16-byte blocks in `a` and `b` into a 32-byte temporary result, shifts the result right by `n` bytes, and returns the low 16 bytes.
- `_mm256_and_pd` ⚠ ((x86 or x86-64) and avx): Computes the bitwise AND of packed double-precision (64-bit) floating-point elements in `a` and `b`.
- `_mm256_and_ps` ⚠ ((x86 or x86-64) and avx): Computes the bitwise AND of packed single-precision (32-bit) floating-point elements in `a` and `b`.
- `_mm256_and_si256` ⚠ ((x86 or x86-64) and avx2): Computes the bitwise AND of 256 bits (representing integer data) in `a` and `b`.
- `_mm256_andnot_pd` ⚠ ((x86 or x86-64) and avx): Computes the bitwise NOT of packed double-precision (64-bit) floating-point elements in `a`, and then AND with `b`.
- `_mm256_andnot_ps` ⚠ ((x86 or x86-64) and avx): Computes the bitwise NOT of packed single-precision (32-bit) floating-point elements in `a` and then AND with `b`.
- `_mm256_andnot_si256` ⚠ ((x86 or x86-64) and avx2): Computes the bitwise NOT of 256 bits (representing integer data) in `a` and then AND with `b`.
- `_mm256_avg_epu8` ⚠ ((x86 or x86-64) and avx2): Averages packed unsigned 8-bit integers in `a` and `b`.
- `_mm256_avg_epu16` ⚠ ((x86 or x86-64) and avx2): Averages packed unsigned 16-bit integers in `a` and `b`.
- `_mm256_blend_epi16` ⚠ ((x86 or x86-64) and avx2): Blends packed 16-bit integers from `a` and `b` using control mask `IMM8`.
- `_mm256_blend_epi32` ⚠ ((x86 or x86-64) and avx2): Blends packed 32-bit integers from `a` and `b` using control mask `IMM8`.
- `_mm256_blend_pd` ⚠ ((x86 or x86-64) and avx): Blends packed double-precision (64-bit) floating-point elements from `a` and `b` using control mask `imm8`.
- `_mm256_blend_ps` ⚠ ((x86 or x86-64) and avx): Blends packed single-precision (32-bit) floating-point elements from `a` and `b` using control mask `imm8`.
- `_mm256_blendv_epi8` ⚠ ((x86 or x86-64) and avx2): Blends packed 8-bit integers from `a` and `b` using `mask`.
- `_mm256_blendv_pd` ⚠ ((x86 or x86-64) and avx): Blends packed double-precision (64-bit) floating-point elements from `a` and `b` using `c` as a mask.
- `_mm256_blendv_ps` ⚠ ((x86 or x86-64) and avx): Blends packed single-precision (32-bit) floating-point elements from `a` and `b` using `c` as a mask.
- `_mm256_broadcast_pd` ⚠ ((x86 or x86-64) and avx): Broadcasts 128 bits from memory (composed of 2 packed double-precision (64-bit) floating-point elements) to all elements of the returned vector.
- `_mm256_broadcast_ps` ⚠ ((x86 or x86-64) and avx): Broadcasts 128 bits from memory (composed of 4 packed single-precision (32-bit) floating-point elements) to all elements of the returned vector.
- `_mm256_broadcast_sd` ⚠ ((x86 or x86-64) and avx): Broadcasts a double-precision (64-bit) floating-point element from memory to all elements of the returned vector.
- `_mm256_broadcast_ss` ⚠ ((x86 or x86-64) and avx): Broadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.
- `_mm256_broadcastb_epi8` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low packed 8-bit integer from `a` to all elements of the 256-bit returned value.
- `_mm256_broadcastd_epi32` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low packed 32-bit integer from `a` to all elements of the 256-bit returned value.
- `_mm256_broadcastq_epi64` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low packed 64-bit integer from `a` to all elements of the 256-bit returned value.
- `_mm256_broadcastsd_pd` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low double-precision (64-bit) floating-point element from `a` to all elements of the 256-bit returned value.
- `_mm256_broadcastsi128_si256` ⚠ ((x86 or x86-64) and avx2): Broadcasts 128 bits of integer data from `a` to all 128-bit lanes in the 256-bit returned value.
- `_mm256_broadcastss_ps` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low single-precision (32-bit) floating-point element from `a` to all elements of the 256-bit returned value.
- `_mm256_broadcastw_epi16` ⚠ ((x86 or x86-64) and avx2): Broadcasts the low packed 16-bit integer from `a` to all elements of the 256-bit returned value.
- `_mm256_bslli_epi128` ⚠ ((x86 or x86-64) and avx2): Shifts 128-bit lanes in `a` left by `imm8` bytes while shifting in zeros.
- `_mm256_bsrli_epi128` ⚠ ((x86 or x86-64) and avx2): Shifts 128-bit lanes in `a` right by `imm8` bytes while shifting in zeros.
- `_mm256_castpd128_pd256` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m128d` to type `__m256d`; the upper 128 bits of the result are undefined.
- `_mm256_castpd256_pd128` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256d` to type `__m128d`.
- `_mm256_castpd_ps` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256d` to type `__m256`.
- `_mm256_castpd_si256` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256d` to type `__m256i`.
- `_mm256_castps128_ps256` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m128` to type `__m256`; the upper 128 bits of the result are undefined.
- `_mm256_castps256_ps128` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256` to type `__m128`.
- `_mm256_castps_pd` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256` to type `__m256d`.
- `_mm256_castps_si256` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256` to type `__m256i`.
- `_mm256_castsi128_si256` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m128i` to type `__m256i`; the upper 128 bits of the result are undefined.
- `_mm256_castsi256_pd` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256i` to type `__m256d`.
- `_mm256_castsi256_ps` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256i` to type `__m256`.
- `_mm256_castsi256_si128` ⚠ ((x86 or x86-64) and avx): Casts vector of type `__m256i` to type `__m128i`.
- `_mm256_ceil_pd` ⚠ ((x86 or x86-64) and avx): Rounds packed double-precision (64-bit) floating point elements in `a` toward positive infinity.
- `_mm256_ceil_ps` ⚠ ((x86 or x86-64) and avx): Rounds packed single-precision (32-bit) floating point elements in `a` toward positive infinity.
- `_mm256_cmp_pd` ⚠ ((x86 or x86-64) and avx): Compares packed double-precision (64-bit) floating-point elements in `a` and `b` based on the comparison operand specified by `IMM5`.
- `_mm256_cmp_ps` ⚠ ((x86 or x86-64) and avx): Compares packed single-precision (32-bit) floating-point elements in `a` and `b` based on the comparison operand specified by `IMM5`.
- `_mm256_cmpeq_epi8` ⚠ ((x86 or x86-64) and avx2): Compares packed 8-bit integers in `a` and `b` for equality.
- `_mm256_cmpeq_epi16` ⚠ ((x86 or x86-64) and avx2): Compares packed 16-bit integers in `a` and `b` for equality.
- `_mm256_cmpeq_epi32` ⚠ ((x86 or x86-64) and avx2): Compares packed 32-bit integers in `a` and `b` for equality.
- `_mm256_cmpeq_epi64` ⚠ ((x86 or x86-64) and avx2): Compares packed 64-bit integers in `a` and `b` for equality.
- `_mm256_cmpgt_epi8` ⚠ ((x86 or x86-64) and avx2): Compares packed 8-bit integers in `a` and `b` for greater-than.
- `_mm256_cmpgt_epi16` ⚠ ((x86 or x86-64) and avx2): Compares packed 16-bit integers in `a` and `b` for greater-than.
- `_mm256_cmpgt_epi32` ⚠ ((x86 or x86-64) and avx2): Compares packed 32-bit integers in `a` and `b` for greater-than.
- `_mm256_cmpgt_epi64` ⚠ ((x86 or x86-64) and avx2): Compares packed 64-bit integers in `a` and `b` for greater-than.
- `_mm256_cvtepi8_epi16` ⚠ ((x86 or x86-64) and avx2): Sign-extend 8-bit integers to 16-bit integers.
- `_mm256_cvtepi8_epi32` ⚠ ((x86 or x86-64) and avx2): Sign-extend 8-bit integers to 32-bit integers.
- `_mm256_cvtepi8_epi64` ⚠ ((x86 or x86-64) and avx2): Sign-extend 8-bit integers to 64-bit integers.
- `_mm256_cvtepi16_epi32` ⚠ ((x86 or x86-64) and avx2): Sign-extend 16-bit integers to 32-bit integers.
- `_mm256_cvtepi16_epi64` ⚠ ((x86 or x86-64) and avx2): Sign-extend 16-bit integers to 64-bit integers.
- `_mm256_cvtepi32_epi64` ⚠ ((x86 or x86-64) and avx2): Sign-extend 32-bit integers to 64-bit integers.
- `_mm256_cvtepi32_pd` ⚠ ((x86 or x86-64) and avx): Converts packed 32-bit integers in `a` to packed double-precision (64-bit) floating-point elements.
- `_mm256_cvtepi32_ps` ⚠ ((x86 or x86-64) and avx): Converts packed 32-bit integers in `a` to packed single-precision (32-bit) floating-point elements.
- `_mm256_cvtepu8_epi16` ⚠ ((x86 or x86-64) and avx2): Zero-extend unsigned 8-bit integers in `a` to 16-bit integers.
- `_mm256_cvtepu8_epi32` ⚠ ((x86 or x86-64) and avx2): Zero-extend the lower eight unsigned 8-bit integers in `a` to 32-bit integers. The upper eight elements of `a` are unused.
- `_mm256_cvtepu8_epi64` ⚠ ((x86 or x86-64) and avx2): Zero-extend the lower four unsigned 8-bit integers in `a` to 64-bit integers. The upper twelve elements of `a` are unused.
- `_mm256_cvtepu16_epi32` ⚠ ((x86 or x86-64) and avx2): Zero-extends packed unsigned 16-bit integers in `a` to packed 32-bit integers, and stores the results in `dst`.
- `_mm256_cvtepu16_epi64` ⚠ ((x86 or x86-64) and avx2): Zero-extend the lower four unsigned 16-bit integers in `a` to 64-bit integers. The upper four elements of `a` are unused.
- `_mm256_cvtepu32_epi64` ⚠ ((x86 or x86-64) and avx2): Zero-extend unsigned 32-bit integers in `a` to 64-bit integers.
- `_mm256_cvtpd_epi32` ⚠ ((x86 or x86-64) and avx): Converts packed double-precision (64-bit) floating-point elements in `a` to packed 32-bit integers.
- `_mm256_cvtpd_ps` ⚠ ((x86 or x86-64) and avx): Converts packed double-precision (64-bit) floating-point elements in `a` to packed single-precision (32-bit) floating-point elements.
- `_mm256_cvtph_ps` ⚠ ((x86 or x86-64) and f16c): Converts the 8 x 16-bit half-precision float values in the 128-bit vector `a` into 8 x 32-bit float values stored in a 256-bit wide vector.
- `_mm256_cvtps_epi32` ⚠ ((x86 or x86-64) and avx): Converts packed single-precision (32-bit) floating-point elements in `a` to packed 32-bit integers.
- `_mm256_cvtps_pd` ⚠ ((x86 or x86-64) and avx): Converts packed single-precision (32-bit) floating-point elements in `a` to packed double-precision (64-bit) floating-point elements.
- `_mm256_cvtps_ph` ⚠ ((x86 or x86-64) and f16c): Converts the 8 x 32-bit float values in the 256-bit vector `a` into 8 x 16-bit half-precision float values stored in a 128-bit wide vector.
- `_mm256_cvtsd_f64` ⚠ ((x86 or x86-64) and avx): Returns the first element of the input vector of `[4 x double]`.
- `_mm256_cvtsi256_si32` ⚠ ((x86 or x86-64) and avx): Returns the first element of the input vector of `[8 x i32]`.
- `_mm256_cvtss_f32` ⚠ ((x86 or x86-64) and avx): Returns the first element of the input vector of `[8 x float]`.
- `_mm256_cvttpd_epi32` ⚠ ((x86 or x86-64) and avx): Converts packed double-precision (64-bit) floating-point elements in `a` to packed 32-bit integers with truncation.
- `_mm256_cvttps_epi32` ⚠ ((x86 or x86-64) and avx): Converts packed single-precision (32-bit) floating-point elements in `a` to packed 32-bit integers with truncation.
- `_mm256_div_pd` ⚠ ((x86 or x86-64) and avx): Computes the division of each of the 4 packed 64-bit floating-point elements in `a` by the corresponding packed elements in `b`.
- `_mm256_div_ps` ⚠ ((x86 or x86-64) and avx): Computes the division of each of the 8 packed 32-bit floating-point elements in `a` by the corresponding packed elements in `b`.
- `_mm256_dp_ps` ⚠ ((x86 or x86-64) and avx): Conditionally multiplies the packed single-precision (32-bit) floating-point elements in `a` and `b` using the high 4 bits in `imm8`, sums the four products, and conditionally returns the sum using the low 4 bits of `imm8`.
- `_mm256_extract_epi8` ⚠ ((x86 or x86-64) and avx2): Extracts an 8-bit integer from `a`, selected with `INDEX`. Returns a 32-bit integer containing the zero-extended integer data.
- `_mm256_extract_epi16` ⚠ ((x86 or x86-64) and avx2): Extracts a 16-bit integer from `a`, selected with `INDEX`. Returns a 32-bit integer containing the zero-extended integer data.
- `_mm256_extract_epi32` ⚠ ((x86 or x86-64) and avx): Extracts a 32-bit integer from `a`, selected with `INDEX`.
- `_mm256_extractf128_pd` ⚠ ((x86 or x86-64) and avx): Extracts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from `a`, selected with `imm8`.
- `_mm256_extractf128_ps` ⚠ ((x86 or x86-64) and avx): Extracts 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from `a`, selected with `imm8`.
- `_mm256_extractf128_si256` ⚠ ((x86 or x86-64) and avx): Extracts 128 bits (composed of integer data) from `a`, selected with `imm8`.
- `_mm256_extracti128_si256` ⚠ ((x86 or x86-64) and avx2): Extracts 128 bits (of integer data) from `a` selected with `IMM1`.
- `_mm256_floor_pd` ⚠ ((x86 or x86-64) and avx): Rounds packed double-precision (64-bit) floating point elements in `a` toward negative infinity.
- `_mm256_floor_ps` ⚠ ((x86 or x86-64) and avx): Rounds packed single-precision (32-bit) floating point elements in `a` toward negative infinity.
- `_mm256_fmadd_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c`.
- `_mm256_fmadd_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c`.
- `_mm256_fmaddsub_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternately adds and subtracts packed elements in `c` to/from the intermediate result.
- `_mm256_fmaddsub_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternately adds and subtracts packed elements in `c` to/from the intermediate result.
- `_mm256_fmsub_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result.
- `_mm256_fmsub_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result.
- `_mm256_fmsubadd_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternately subtracts and adds packed elements in `c` from/to the intermediate result.
- `_mm256_fmsubadd_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternately subtracts and adds packed elements in `c` from/to the intermediate result.
- `_mm256_fnmadd_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c`.
- `_mm256_fnmadd_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c`.
- `_mm256_fnmsub_pd` ⚠ ((x86 or x86-64) and fma): Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result.
- `_mm256_fnmsub_ps` ⚠ ((x86 or x86-64) and fma): Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result.
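A minimal sketch (assuming FMA support) of `_mm256_fmadd_ps`, which computes `a * b + c` per lane with a single rounding step:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("fma") {
        unsafe {
            let a = _mm256_set1_ps(2.0);
            let b = _mm256_setr_ps(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0);
            let c = _mm256_set1_ps(0.5);

            // Each lane: a * b + c, fused (one rounding step).
            let r = _mm256_fmadd_ps(a, b, c);

            let mut out = [0.0f32; 8];
            _mm256_storeu_ps(out.as_mut_ptr(), r);
            assert_eq!(out[0], 2.5);  // 2.0 * 1.0 + 0.5
            assert_eq!(out[7], 16.5); // 2.0 * 8.0 + 0.5
        }
    }
}
```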
- `_mm256_hadd_epi16` ⚠ ((x86 or x86-64) and avx2): Horizontally adds adjacent pairs of 16-bit integers in `a` and `b`.
- `_mm256_hadd_epi32` ⚠ ((x86 or x86-64) and avx2): Horizontally adds adjacent pairs of 32-bit integers in `a` and `b`.
- `_mm256_hadd_pd` ⚠ ((x86 or x86-64) and avx): Horizontal addition of adjacent pairs in the two packed vectors of 4 64-bit floating points `a` and `b`. In the result, sums of elements from `a` are returned in even locations, while sums of elements from `b` are returned in odd locations.
- `_mm256_hadd_ps` ⚠ ((x86 or x86-64) and avx): Horizontal addition of adjacent pairs in the two packed vectors of 8 32-bit floating points `a` and `b`. In the result, sums of elements from `a` are returned in locations of indices 0, 1, 4, 5, while sums of elements from `b` are in locations 2, 3, 6, 7.
- `_mm256_hadds_epi16` ⚠ ((x86 or x86-64) and avx2): Horizontally adds adjacent pairs of 16-bit integers in `a` and `b` using saturation.
- `_mm256_hsub_epi16` ⚠ ((x86 or x86-64) and avx2): Horizontally subtracts adjacent pairs of 16-bit integers in `a` and `b`.
- `_mm256_hsub_epi32` ⚠ ((x86 or x86-64) and avx2): Horizontally subtracts adjacent pairs of 32-bit integers in `a` and `b`.
- `_mm256_hsub_pd` ⚠ ((x86 or x86-64) and avx): Horizontal subtraction of adjacent pairs in the two packed vectors of 4 64-bit floating points `a` and `b`. In the result, differences of elements from `a` are returned in even locations, while differences of elements from `b` are returned in odd locations.
- `_mm256_hsub_ps` ⚠ ((x86 or x86-64) and avx): Horizontal subtraction of adjacent pairs in the two packed vectors of 8 32-bit floating points `a` and `b`. In the result, differences of elements from `a` are returned in locations of indices 0, 1, 4, 5, while differences of elements from `b` are in locations 2, 3, 6, 7.
- `_mm256_hsubs_epi16` ⚠ ((x86 or x86-64) and avx2): Horizontally subtracts adjacent pairs of 16-bit integers in `a` and `b` using saturation.
- `_mm256_i32gather_epi32` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i32gather_epi64` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i32gather_pd` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i32gather_ps` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i64gather_epi32` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i64gather_epi64` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i64gather_pd` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- `_mm256_i64gather_ps` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
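A minimal sketch (assuming AVX2) of `_mm256_i32gather_epi32`, loading eight `i32` values from arbitrary indices in one call; the `SCALE` const parameter is in bytes, so 4 for `i32` data:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("avx2") {
        unsafe {
            let table: Vec<i32> = (0..100).map(|i| i * 10).collect();

            // Eight arbitrary indices into `table`.
            let idx = _mm256_setr_epi32(3, 1, 4, 1, 5, 9, 2, 6);

            // SCALE = 4: offsets are scaled by the element size in bytes.
            let v = _mm256_i32gather_epi32::<4>(table.as_ptr(), idx);

            let mut out = [0i32; 8];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, v);
            assert_eq!(out, [30, 10, 40, 10, 50, 90, 20, 60]);
        }
    }
}
```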
- `_mm256_insert_epi8` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, and inserts the 8-bit integer `i` into result at the location specified by `index`.
- `_mm256_insert_epi16` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, and inserts the 16-bit integer `i` into result at the location specified by `index`.
- `_mm256_insert_epi32` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, and inserts the 32-bit integer `i` into result at the location specified by `index`.
- `_mm256_insertf128_pd` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, then inserts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from `b` into result at the location specified by `imm8`.
- `_mm256_insertf128_ps` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, then inserts 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from `b` into result at the location specified by `imm8`.
- `_mm256_insertf128_si256` ⚠ ((x86 or x86-64) and avx): Copies `a` to result, then inserts 128 bits from `b` into result at the location specified by `imm8`.
- `_mm256_inserti128_si256` ⚠ ((x86 or x86-64) and avx2): Copies `a` to `dst`, then inserts 128 bits (of integer data) from `b` at the location specified by `IMM1`.
- `_mm256_lddqu_si256` ⚠ ((x86 or x86-64) and avx): Loads 256-bits of integer data from unaligned memory into result. This intrinsic may perform better than `_mm256_loadu_si256` when the data crosses a cache line boundary.
- `_mm256_load_pd` ⚠ ((x86 or x86-64) and avx): Loads 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. `mem_addr` must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- `_mm256_load_ps` ⚠ ((x86 or x86-64) and avx): Loads 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. `mem_addr` must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- `_mm256_load_si256` ⚠ ((x86 or x86-64) and avx): Loads 256-bits of integer data from memory into result. `mem_addr` must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- `_mm256_loadu2_m128` ⚠ ((x86 or x86-64) and avx): Loads two 128-bit values (composed of 4 packed single-precision (32-bit) floating-point elements) from memory, and combines them into a 256-bit value. `hiaddr` and `loaddr` do not need to be aligned on any particular boundary.
- `_mm256_loadu2_m128d` ⚠ ((x86 or x86-64) and avx): Loads two 128-bit values (composed of 2 packed double-precision (64-bit) floating-point elements) from memory, and combines them into a 256-bit value. `hiaddr` and `loaddr` do not need to be aligned on any particular boundary.
- `_mm256_loadu2_m128i` ⚠ ((x86 or x86-64) and avx): Loads two 128-bit values (composed of integer data) from memory, and combines them into a 256-bit value. `hiaddr` and `loaddr` do not need to be aligned on any particular boundary.
- `_mm256_loadu_pd` ⚠ ((x86 or x86-64) and avx): Loads 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. `mem_addr` does not need to be aligned on any particular boundary.
- `_mm256_loadu_ps` ⚠ ((x86 or x86-64) and avx): Loads 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. `mem_addr` does not need to be aligned on any particular boundary.
- `_mm256_loadu_si256` ⚠ ((x86 or x86-64) and avx): Loads 256-bits of integer data from memory into result. `mem_addr` does not need to be aligned on any particular boundary.
- `_mm256_madd_epi16` ⚠ ((x86 or x86-64) and avx2): Multiplies packed signed 16-bit integers in `a` and `b`, producing intermediate signed 32-bit integers. Horizontally adds adjacent pairs of intermediate 32-bit integers.
- `_mm256_maddubs_epi16` ⚠ ((x86 or x86-64) and avx2): Vertically multiplies each unsigned 8-bit integer from `a` with the corresponding signed 8-bit integer from `b`, producing intermediate signed 16-bit integers. Horizontally adds adjacent pairs of intermediate signed 16-bit integers.
- `_mm256_mask_i32gather_epi32` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i32gather_epi64` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i32gather_pd` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i32gather_ps` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i64gather_epi32` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i64gather_epi64` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i64gather_pd` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_mask_i64gather_ps` ⚠ ((x86 or x86-64) and avx2): Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If mask is set, load the value from `src` in that position instead.
- `_mm256_maskload_epi32` ⚠ ((x86 or x86-64) and avx2): Loads packed 32-bit integers from memory pointed by `mem_addr` using `mask` (elements are zeroed out when the highest bit is not set in the corresponding element).
- `_mm256_maskload_epi64` ⚠ ((x86 or x86-64) and avx2): Loads packed 64-bit integers from memory pointed by `mem_addr` using `mask` (elements are zeroed out when the highest bit is not set in the corresponding element).
- `_mm256_maskload_pd` ⚠ ((x86 or x86-64) and avx): Loads packed double-precision (64-bit) floating-point elements from memory into result using `mask` (elements are zeroed out when the high bit of the corresponding element is not set).
- `_mm256_maskload_ps` ⚠ ((x86 or x86-64) and avx): Loads packed single-precision (32-bit) floating-point elements from memory into result using `mask` (elements are zeroed out when the high bit of the corresponding element is not set).
- `_mm256_maskstore_epi32` ⚠ ((x86 or x86-64) and avx2): Stores packed 32-bit integers from `a` into memory pointed by `mem_addr` using `mask` (elements are not stored when the highest bit is not set in the corresponding element).
- `_mm256_maskstore_epi64` ⚠ ((x86 or x86-64) and avx2): Stores packed 64-bit integers from `a` into memory pointed by `mem_addr` using `mask` (elements are not stored when the highest bit is not set in the corresponding element).
- `_mm256_maskstore_pd` ⚠ ((x86 or x86-64) and avx): Stores packed double-precision (64-bit) floating-point elements from `a` into memory using `mask`.
- `_mm256_maskstore_ps` ⚠ ((x86 or x86-64) and avx): Stores packed single-precision (32-bit) floating-point elements from `a` into memory using `mask`.
- `_mm256_max_epi8` ⚠ ((x86 or x86-64) and avx2): Compares packed 8-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_epi16` ⚠ ((x86 or x86-64) and avx2): Compares packed 16-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_epi32` ⚠ ((x86 or x86-64) and avx2): Compares packed 32-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_epu8` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 8-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_epu16` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 16-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_epu32` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 32-bit integers in `a` and `b`, and returns the packed maximum values.
- `_mm256_max_pd` ⚠ ((x86 or x86-64) and avx): Compares packed double-precision (64-bit) floating-point elements in `a` and `b`, and returns packed maximum values.
- `_mm256_max_ps` ⚠ ((x86 or x86-64) and avx): Compares packed single-precision (32-bit) floating-point elements in `a` and `b`, and returns packed maximum values.
- `_mm256_min_epi8` ⚠ ((x86 or x86-64) and avx2): Compares packed 8-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_epi16` ⚠ ((x86 or x86-64) and avx2): Compares packed 16-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_epi32` ⚠ ((x86 or x86-64) and avx2): Compares packed 32-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_epu8` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 8-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_epu16` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 16-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_epu32` ⚠ ((x86 or x86-64) and avx2): Compares packed unsigned 32-bit integers in `a` and `b`, and returns the packed minimum values.
- `_mm256_min_pd` ⚠ ((x86 or x86-64) and avx): Compares packed double-precision (64-bit) floating-point elements in `a` and `b`, and returns packed minimum values.
- `_mm256_min_ps` ⚠ ((x86 or x86-64) and avx): Compares packed single-precision (32-bit) floating-point elements in `a` and `b`, and returns packed minimum values.
- `_mm256_movedup_pd` ⚠ ((x86 or x86-64) and avx): Duplicates even-indexed double-precision (64-bit) floating-point elements from `a`, and returns the results.
- `_mm256_movehdup_ps` ⚠ ((x86 or x86-64) and avx): Duplicates odd-indexed single-precision (32-bit) floating-point elements from `a`, and returns the results.
- `_mm256_moveldup_ps` ⚠ ((x86 or x86-64) and avx): Duplicates even-indexed single-precision (32-bit) floating-point elements from `a`, and returns the results.
- `_mm256_movemask_epi8` ⚠ ((x86 or x86-64) and avx2): Creates a mask from the most significant bit of each 8-bit element in `a`, and returns the result.
- `_mm256_movemask_pd` ⚠ ((x86 or x86-64) and avx): Sets each bit of the returned mask based on the most significant bit of the corresponding packed double-precision (64-bit) floating-point element in `a`.
- `_mm256_movemask_ps` ⚠ ((x86 or x86-64) and avx): Sets each bit of the returned mask based on the most significant bit of the corresponding packed single-precision (32-bit) floating-point element in `a`.
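A minimal sketch (assuming AVX2) of the compare-then-movemask idiom, using `_mm256_cmpeq_epi8` and `_mm256_movemask_epi8` to locate a byte within a 32-byte block:

```rust
#[cfg(target_arch = "x86_64")]
fn demo() {
    use std::arch::x86_64::*;

    if is_x86_feature_detected!("avx2") {
        unsafe {
            let mut block = [7u8; 32];
            block[13] = b'x';
            block[29] = b'x';

            let v = _mm256_loadu_si256(block.as_ptr() as *const __m256i);
            let needle = _mm256_set1_epi8(b'x' as i8);

            // Equal bytes become 0xFF lanes; movemask packs their high
            // bits into a 32-bit integer, one bit per byte.
            let eq = _mm256_cmpeq_epi8(v, needle);
            let mask = _mm256_movemask_epi8(eq) as u32;

            assert_eq!(mask.trailing_zeros(), 13); // first match
            assert_eq!(mask.count_ones(), 2);      // total matches
        }
    }
}
```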
- _mm256_mpsadbw_epu8 ⚠ (x86 or x86-64) and avx2 - Computes the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and stores the 16-bit results in dst. Eight SADs are performed for each 128-bit lane using one quadruplet from b and eight quadruplets from a. One quadruplet is selected from b starting at the offset specified in imm8. Eight quadruplets are formed from sequential 8-bit integers selected from a starting at the offset specified in imm8.
- _mm256_mul_epi32 ⚠ (x86 or x86-64) and avx2 - Multiplies the low 32-bit integers from each packed 64-bit element in a and b.
- _mm256_mul_epu32 ⚠ (x86 or x86-64) and avx2 - Multiplies the low unsigned 32-bit integers from each packed 64-bit element in a and b.
- _mm256_mul_pd ⚠ (x86 or x86-64) and avx - Multiplies packed double-precision (64-bit) floating-point elements in a and b.
- _mm256_mul_ps ⚠ (x86 or x86-64) and avx - Multiplies packed single-precision (32-bit) floating-point elements in a and b.
- _mm256_mulhi_epi16 ⚠ (x86 or x86-64) and avx2 - Multiplies the packed 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.
- _mm256_mulhi_epu16 ⚠ (x86 or x86-64) and avx2 - Multiplies the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.
- _mm256_mulhrs_epi16 ⚠ (x86 or x86-64) and avx2 - Multiplies packed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncate each intermediate integer to the 18 most significant bits, round by adding 1, and return bits [16:1].
- _mm256_mullo_epi16 ⚠ (x86 or x86-64) and avx2 - Multiplies the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and returns the low 16 bits of the intermediate integers.
- _mm256_mullo_epi32 ⚠ (x86 or x86-64) and avx2 - Multiplies the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and returns the low 32 bits of the intermediate integers.
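Because mullo and mulhi each return one half of the 32-bit intermediate product, they are commonly paired; a hedged sketch (the helper name is illustrative, and the unpack intrinsics used to interleave the halves are documented later in this index):

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Caller must ensure AVX2 is available, e.g. via is_x86_feature_detected!.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn widening_mul_i16(a: __m256i, b: __m256i) -> (__m256i, __m256i) {
    let lo = _mm256_mullo_epi16(a, b); // low 16 bits of each 32-bit product
    let hi = _mm256_mulhi_epi16(a, b); // high 16 bits of each 32-bit product
    // Interleaving lo/hi recovers the full 32-bit products, per 128-bit lane.
    (_mm256_unpacklo_epi16(lo, hi), _mm256_unpackhi_epi16(lo, hi))
}
```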
- _mm256_or_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise OR of packed double-precision (64-bit) floating-point elements in a and b.
- _mm256_or_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise OR of packed single-precision (32-bit) floating-point elements in a and b.
- _mm256_or_si256 ⚠ (x86 or x86-64) and avx2 - Computes the bitwise OR of 256 bits (representing integer data) in a and b.
- _mm256_packs_epi16 ⚠ (x86 or x86-64) and avx2 - Converts packed 16-bit integers from a and b to packed 8-bit integers using signed saturation.
- _mm256_packs_epi32 ⚠ (x86 or x86-64) and avx2 - Converts packed 32-bit integers from a and b to packed 16-bit integers using signed saturation.
- _mm256_packus_epi16 ⚠ (x86 or x86-64) and avx2 - Converts packed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation.
- _mm256_packus_epi32 ⚠ (x86 or x86-64) and avx2 - Converts packed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation.
- _mm256_permute2f128_pd ⚠ (x86 or x86-64) and avx - Shuffles 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) selected by imm8 from a and b.
- _mm256_permute2f128_ps ⚠ (x86 or x86-64) and avx - Shuffles 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) selected by imm8 from a and b.
- _mm256_permute2f128_si256 ⚠ (x86 or x86-64) and avx - Shuffles 128-bits (composed of integer data) selected by imm8 from a and b.
- _mm256_permute2x128_si256 ⚠ (x86 or x86-64) and avx2 - Shuffles 128-bits of integer data selected by imm8 from a and b.
- _mm256_permute4x64_epi64 ⚠ (x86 or x86-64) and avx2 - Permutes 64-bit integers from a using control mask imm8.
- _mm256_permute4x64_pd ⚠ (x86 or x86-64) and avx2 - Shuffles 64-bit floating-point elements in a across lanes using the control in imm8.
- _mm256_permute_pd ⚠ (x86 or x86-64) and avx - Shuffles double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8.
- _mm256_permute_ps ⚠ (x86 or x86-64) and avx - Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.
- _mm256_permutevar8x32_epi32 ⚠ (x86 or x86-64) and avx2 - Permutes packed 32-bit integers from a according to the content of b.
- _mm256_permutevar8x32_ps ⚠ (x86 or x86-64) and avx2 - Shuffles eight 32-bit floating-point elements in a across lanes using the corresponding 32-bit integer index in idx.
- _mm256_permutevar_pd ⚠ (x86 or x86-64) and avx - Shuffles double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in b.
- _mm256_permutevar_ps ⚠ (x86 or x86-64) and avx - Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b.
- _mm256_rcp_ps ⚠ (x86 or x86-64) and avx - Computes the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and returns the results. The maximum relative error for this approximation is less than 1.5*2^-12.
- _mm256_round_pd ⚠ (x86 or x86-64) and avx - Rounds packed double-precision (64-bit) floating point elements in a according to the flag ROUNDING. The value of ROUNDING may be as follows:
- _mm256_round_ps ⚠ (x86 or x86-64) and avx - Rounds packed single-precision (32-bit) floating point elements in a according to the flag ROUNDING. The value of ROUNDING may be as follows:
- _mm256_rsqrt_ps ⚠ (x86 or x86-64) and avx - Computes the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and returns the results. The maximum relative error for this approximation is less than 1.5*2^-12.
- _mm256_sad_epu8 ⚠ (x86 or x86-64) and avx2 - Computes the absolute differences of packed unsigned 8-bit integers in a and b, then horizontally sum each consecutive 8 differences to produce four unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low 16 bits of the 64-bit return value.
- _mm256_set1_epi8 ⚠ (x86 or x86-64) and avx - Broadcasts 8-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastb instruction.
- _mm256_set1_epi16 ⚠ (x86 or x86-64) and avx - Broadcasts 16-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastw instruction.
- _mm256_set1_epi32 ⚠ (x86 or x86-64) and avx - Broadcasts 32-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastd instruction.
- _mm256_set1_epi64x ⚠ (x86 or x86-64) and avx - Broadcasts 64-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastq instruction.
- _mm256_set1_pd ⚠ (x86 or x86-64) and avx - Broadcasts double-precision (64-bit) floating-point value a to all elements of returned vector.
- _mm256_set1_ps ⚠ (x86 or x86-64) and avx - Broadcasts single-precision (32-bit) floating-point value a to all elements of returned vector.
- _mm256_set_epi8 ⚠ (x86 or x86-64) and avx - Sets packed 8-bit integers in returned vector with the supplied values.
- _mm256_set_epi16 ⚠ (x86 or x86-64) and avx - Sets packed 16-bit integers in returned vector with the supplied values.
- _mm256_set_epi32 ⚠ (x86 or x86-64) and avx - Sets packed 32-bit integers in returned vector with the supplied values.
- _mm256_set_epi64x ⚠ (x86 or x86-64) and avx - Sets packed 64-bit integers in returned vector with the supplied values.
- _mm256_set_m128 ⚠ (x86 or x86-64) and avx - Sets packed __m256 returned vector with the supplied values.
- _mm256_set_m128d ⚠ (x86 or x86-64) and avx - Sets packed __m256d returned vector with the supplied values.
- _mm256_set_m128i ⚠ (x86 or x86-64) and avx - Sets packed __m256i returned vector with the supplied values.
- _mm256_set_pd ⚠ (x86 or x86-64) and avx - Sets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values.
- _mm256_set_ps ⚠ (x86 or x86-64) and avx - Sets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values.
- _mm256_setr_epi8 ⚠ (x86 or x86-64) and avx - Sets packed 8-bit integers in returned vector with the supplied values in reverse order.
- _mm256_setr_epi16 ⚠ (x86 or x86-64) and avx - Sets packed 16-bit integers in returned vector with the supplied values in reverse order.
- _mm256_setr_epi32 ⚠ (x86 or x86-64) and avx - Sets packed 32-bit integers in returned vector with the supplied values in reverse order.
- _mm256_setr_epi64x ⚠ (x86 or x86-64) and avx - Sets packed 64-bit integers in returned vector with the supplied values in reverse order.
- _mm256_setr_m128 ⚠ (x86 or x86-64) and avx - Sets packed __m256 returned vector with the supplied values.
- _mm256_setr_m128d ⚠ (x86 or x86-64) and avx - Sets packed __m256d returned vector with the supplied values.
- _mm256_setr_m128i ⚠ (x86 or x86-64) and avx - Sets packed __m256i returned vector with the supplied values.
- _mm256_setr_pd ⚠ (x86 or x86-64) and avx - Sets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values in reverse order.
- _mm256_setr_ps ⚠ (x86 or x86-64) and avx - Sets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values in reverse order.
- _mm256_setzero_pd ⚠ (x86 or x86-64) and avx - Returns vector of type __m256d with all elements set to zero.
- _mm256_setzero_ps ⚠ (x86 or x86-64) and avx - Returns vector of type __m256 with all elements set to zero.
- _mm256_setzero_si256 ⚠ (x86 or x86-64) and avx - Returns vector of type __m256i with all elements set to zero.
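A small sketch of the set family's argument conventions (helper name illustrative; note that set_ lists elements from the highest-order lane down, while setr_ takes them in low-to-high memory order):

```rust
#[cfg(target_arch = "x86_64")]
fn build_vectors() {
    use std::arch::x86_64::*;
    assert!(is_x86_feature_detected!("avx"));
    unsafe {
        // All eight i32 lanes set to 7.
        let splat = _mm256_set1_epi32(7);
        // set_ takes the highest-order element first ...
        let hi_first = _mm256_set_epi32(7, 6, 5, 4, 3, 2, 1, 0);
        // ... setr_ takes the same elements in reversed (memory) order.
        let lo_first = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
        // All 256 bits cleared.
        let zero = _mm256_setzero_si256();
        let _ = (splat, hi_first, lo_first, zero);
    }
}
```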
- _mm256_shuffle_epi8 ⚠ (x86 or x86-64) and avx2 - Shuffles bytes from a according to the content of b.
- _mm256_shuffle_epi32 ⚠ (x86 or x86-64) and avx2 - Shuffles 32-bit integers in 128-bit lanes of a using the control in imm8.
- _mm256_shuffle_pd ⚠ (x86 or x86-64) and avx - Shuffles double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8.
- _mm256_shuffle_ps ⚠ (x86 or x86-64) and avx - Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.
- _mm256_shufflehi_epi16 ⚠ (x86 or x86-64) and avx2 - Shuffles 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. The low 64 bits of 128-bit lanes of a are copied to the output.
- _mm256_shufflelo_epi16 ⚠ (x86 or x86-64) and avx2 - Shuffles 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. The high 64 bits of 128-bit lanes of a are copied to the output.
- _mm256_sign_epi8 ⚠ (x86 or x86-64) and avx2 - Negates packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.
- _mm256_sign_epi16 ⚠ (x86 or x86-64) and avx2 - Negates packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.
- _mm256_sign_epi32 ⚠ (x86 or x86-64) and avx2 - Negates packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.
- _mm256_sll_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a left by count while shifting in zeros, and returns the result.
- _mm256_sll_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a left by count while shifting in zeros, and returns the result.
- _mm256_sll_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a left by count while shifting in zeros, and returns the result.
- _mm256_slli_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a left by IMM8 while shifting in zeros, and returns the results.
- _mm256_slli_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a left by IMM8 while shifting in zeros, and returns the results.
- _mm256_slli_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a left by IMM8 while shifting in zeros, and returns the results.
- _mm256_slli_si256 ⚠ (x86 or x86-64) and avx2 - Shifts 128-bit lanes in a left by imm8 bytes while shifting in zeros.
- _mm256_sllv_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.
- _mm256_sllv_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.
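A brief sketch contrasting the immediate (slli_) and variable (sllv_) shifts; assumes current std, where the immediate count is a const generic parameter:

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Caller must verify AVX2 first, e.g. is_x86_feature_detected!("avx2").
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn shift_examples(a: __m256i, per_lane: __m256i) -> (__m256i, __m256i) {
    // slli_: the shift amount is a compile-time constant.
    let fixed = _mm256_slli_epi32::<3>(a);
    // sllv_: each 32-bit element shifts by its own count from `per_lane`.
    let variable = _mm256_sllv_epi32(a, per_lane);
    (fixed, variable)
}
```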
- _mm256_sqrt_pd ⚠ (x86 or x86-64) and avx - Returns the square root of packed double-precision (64-bit) floating point elements in a.
- _mm256_sqrt_ps ⚠ (x86 or x86-64) and avx - Returns the square root of packed single-precision (32-bit) floating point elements in a.
- _mm256_sra_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a right by count while shifting in sign bits.
- _mm256_sra_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by count while shifting in sign bits.
- _mm256_srai_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a right by IMM8 while shifting in sign bits.
- _mm256_srai_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by IMM8 while shifting in sign bits.
- _mm256_srav_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits.
- _mm256_srl_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a right by count while shifting in zeros.
- _mm256_srl_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by count while shifting in zeros.
- _mm256_srl_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a right by count while shifting in zeros.
- _mm256_srli_epi16 ⚠ (x86 or x86-64) and avx2 - Shifts packed 16-bit integers in a right by IMM8 while shifting in zeros.
- _mm256_srli_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by IMM8 while shifting in zeros.
- _mm256_srli_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a right by IMM8 while shifting in zeros.
- _mm256_srli_si256 ⚠ (x86 or x86-64) and avx2 - Shifts 128-bit lanes in a right by imm8 bytes while shifting in zeros.
- _mm256_srlv_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.
- _mm256_srlv_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.
- _mm256_store_pd ⚠ (x86 or x86-64) and avx - Stores 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_store_ps ⚠ (x86 or x86-64) and avx - Stores 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_store_si256 ⚠ (x86 or x86-64) and avx - Stores 256-bits of integer data from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_storeu2_m128 ⚠ (x86 or x86-64) and avx - Stores the high and low 128-bit halves (each composed of 4 packed single-precision (32-bit) floating-point elements) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.
- _mm256_storeu2_m128d ⚠ (x86 or x86-64) and avx - Stores the high and low 128-bit halves (each composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.
- _mm256_storeu2_m128i ⚠ (x86 or x86-64) and avx - Stores the high and low 128-bit halves (each composed of integer data) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.
- _mm256_storeu_pd ⚠ (x86 or x86-64) and avx - Stores 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.
- _mm256_storeu_ps ⚠ (x86 or x86-64) and avx - Stores 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.
- _mm256_storeu_si256 ⚠ (x86 or x86-64) and avx - Stores 256-bits of integer data from a into memory. mem_addr does not need to be aligned on any particular boundary.
- _mm256_stream_load_si256 ⚠ (x86 or x86-64) and avx2 - Load 256-bits of integer data from memory into dst using a non-temporal memory hint. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm256_stream_pd ⚠ (x86 or x86-64) and avx - Moves double-precision values from a 256-bit vector of [4 x double] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm256_stream_ps ⚠ (x86 or x86-64) and avx - Moves single-precision floating point values from a 256-bit vector of [8 x float] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm256_stream_si256 ⚠ (x86 or x86-64) and avx - Moves integer data from a 256-bit integer vector to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
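A minimal sketch of the aligned-vs-unaligned distinction above (helper name illustrative; loadu is the unaligned load intrinsic documented elsewhere in this module):

```rust
#[cfg(target_arch = "x86_64")]
fn copy_doubles(v: &[f64; 4]) -> [f64; 4] {
    use std::arch::x86_64::*;
    assert!(is_x86_feature_detected!("avx"));
    let mut out = [0.0f64; 4];
    unsafe {
        let x = _mm256_loadu_pd(v.as_ptr());
        // storeu_ tolerates any alignment; store_ would require mem_addr to
        // be 32-byte aligned, which a plain [f64; 4] does not guarantee.
        _mm256_storeu_pd(out.as_mut_ptr(), x);
    }
    out
}
```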
- _mm256_sub_epi8 ⚠ (x86 or x86-64) and avx2 - Subtract packed 8-bit integers in b from packed 8-bit integers in a.
- _mm256_sub_epi16 ⚠ (x86 or x86-64) and avx2 - Subtract packed 16-bit integers in b from packed 16-bit integers in a.
- _mm256_sub_epi32 ⚠ (x86 or x86-64) and avx2 - Subtract packed 32-bit integers in b from packed 32-bit integers in a.
- _mm256_sub_epi64 ⚠ (x86 or x86-64) and avx2 - Subtract packed 64-bit integers in b from packed 64-bit integers in a.
- _mm256_sub_pd ⚠ (x86 or x86-64) and avx - Subtracts packed double-precision (64-bit) floating-point elements in b from packed elements in a.
- _mm256_sub_ps ⚠ (x86 or x86-64) and avx - Subtracts packed single-precision (32-bit) floating-point elements in b from packed elements in a.
- _mm256_subs_epi8 ⚠ (x86 or x86-64) and avx2 - Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.
- _mm256_subs_epi16 ⚠ (x86 or x86-64) and avx2 - Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.
- _mm256_subs_epu8 ⚠ (x86 or x86-64) and avx2 - Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.
- _mm256_subs_epu16 ⚠ (x86 or x86-64) and avx2 - Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.
- _mm256_testc_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.
- _mm256_testc_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.
- _mm256_testc_si256 ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Computes the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return the CF value.
- _mm256_testnzc_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.
- _mm256_testnzc_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.
- _mm256_testnzc_si256 ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Computes the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.
- _mm256_testz_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.
- _mm256_testz_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.
- _mm256_testz_si256 ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Computes the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return the ZF value.
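A common idiom built on the ZF result described above is an "is this vector all zeros?" test; a minimal sketch:

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Caller must verify AVX first, e.g. is_x86_feature_detected!("avx").
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn is_all_zero(v: __m256i) -> bool {
    // ZF is set iff v AND v has no bits set, i.e. v is entirely zero.
    _mm256_testz_si256(v, v) != 0
}
```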
- _mm256_undefined_pd ⚠ (x86 or x86-64) and avx - Returns vector of type __m256d with indeterminate elements. Despite being “undefined”, this is some valid value and not equivalent to mem::MaybeUninit. In practice, this is equivalent to mem::zeroed.
- _mm256_undefined_ps ⚠ (x86 or x86-64) and avx - Returns vector of type __m256 with indeterminate elements. Despite being “undefined”, this is some valid value and not equivalent to mem::MaybeUninit. In practice, this is equivalent to mem::zeroed.
- _mm256_undefined_si256 ⚠ (x86 or x86-64) and avx - Returns vector of type __m256i with indeterminate elements. Despite being “undefined”, this is some valid value and not equivalent to mem::MaybeUninit. In practice, this is equivalent to mem::zeroed.
- _mm256_unpackhi_epi8 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 8-bit integers from the high half of each 128-bit lane in a and b.
- _mm256_unpackhi_epi16 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 16-bit integers from the high half of each 128-bit lane of a and b.
- _mm256_unpackhi_epi32 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 32-bit integers from the high half of each 128-bit lane of a and b.
- _mm256_unpackhi_epi64 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 64-bit integers from the high half of each 128-bit lane of a and b.
- _mm256_unpackhi_pd ⚠ (x86 or x86-64) and avx - Unpacks and interleaves double-precision (64-bit) floating-point elements from the high half of each 128-bit lane in a and b.
- _mm256_unpackhi_ps ⚠ (x86 or x86-64) and avx - Unpacks and interleaves single-precision (32-bit) floating-point elements from the high half of each 128-bit lane in a and b.
- _mm256_unpacklo_epi8 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 8-bit integers from the low half of each 128-bit lane of a and b.
- _mm256_unpacklo_epi16 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 16-bit integers from the low half of each 128-bit lane of a and b.
- _mm256_unpacklo_epi32 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 32-bit integers from the low half of each 128-bit lane of a and b.
- _mm256_unpacklo_epi64 ⚠ (x86 or x86-64) and avx2 - Unpacks and interleaves 64-bit integers from the low half of each 128-bit lane of a and b.
- _mm256_unpacklo_pd ⚠ (x86 or x86-64) and avx - Unpacks and interleaves double-precision (64-bit) floating-point elements from the low half of each 128-bit lane in a and b.
- _mm256_unpacklo_ps ⚠ (x86 or x86-64) and avx - Unpacks and interleaves single-precision (32-bit) floating-point elements from the low half of each 128-bit lane in a and b.
- _mm256_xor_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise XOR of packed double-precision (64-bit) floating-point elements in a and b.
- _mm256_xor_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise XOR of packed single-precision (32-bit) floating-point elements in a and b.
- _mm256_xor_si256 ⚠ (x86 or x86-64) and avx2 - Computes the bitwise XOR of 256 bits (representing integer data) in a and b.
- _mm256_zeroall ⚠ (x86 or x86-64) and avx - Zeroes the contents of all XMM or YMM registers.
- _mm256_zeroupper ⚠ (x86 or x86-64) and avx - Zeroes the upper 128 bits of all YMM registers; the lower 128-bits of the registers are unmodified.
- _mm256_zextpd128_pd256 ⚠ (x86 or x86-64) and avx - Constructs a 256-bit floating-point vector of [4 x double] from a 128-bit floating-point vector of [2 x double]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.
- _mm256_zextps128_ps256 ⚠ (x86 or x86-64) and avx - Constructs a 256-bit floating-point vector of [8 x float] from a 128-bit floating-point vector of [4 x float]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.
- _mm256_zextsi128_si256 ⚠ (x86 or x86-64) and avx - Constructs a 256-bit integer vector from a 128-bit integer vector. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.
- _mm_abs_epi8 ⚠ (x86 or x86-64) and ssse3 - Computes the absolute value of packed 8-bit signed integers in a and return the unsigned results.
- _mm_abs_epi16 ⚠ (x86 or x86-64) and ssse3 - Computes the absolute value of each of the packed 16-bit signed integers in a and return the 16-bit unsigned integer.
- _mm_abs_epi32 ⚠ (x86 or x86-64) and ssse3 - Computes the absolute value of each of the packed 32-bit signed integers in a and return the 32-bit unsigned integer.
- _mm_add_epi8 ⚠ (x86 or x86-64) and sse2 - Adds packed 8-bit integers in a and b.
- _mm_add_epi16 ⚠ (x86 or x86-64) and sse2 - Adds packed 16-bit integers in a and b.
- _mm_add_epi32 ⚠ (x86 or x86-64) and sse2 - Adds packed 32-bit integers in a and b.
- _mm_add_epi64 ⚠ (x86 or x86-64) and sse2 - Adds packed 64-bit integers in a and b.
- _mm_add_pd ⚠ (x86 or x86-64) and sse2 - Adds packed double-precision (64-bit) floating-point elements in a and b.
- _mm_add_ps ⚠ (x86 or x86-64) and sse - Adds packed single-precision (32-bit) floating-point elements in a and b.
- _mm_add_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the sum of the low elements of a and b.
- _mm_add_ss ⚠ (x86 or x86-64) and sse - Adds the first component of a and b, the other components are copied from a.
- _mm_adds_epi8 ⚠ (x86 or x86-64) and sse2 - Adds packed 8-bit integers in a and b using saturation.
- _mm_adds_epi16 ⚠ (x86 or x86-64) and sse2 - Adds packed 16-bit integers in a and b using saturation.
- _mm_adds_epu8 ⚠ (x86 or x86-64) and sse2 - Adds packed unsigned 8-bit integers in a and b using saturation.
- _mm_adds_epu16 ⚠ (x86 or x86-64) and sse2 - Adds packed unsigned 16-bit integers in a and b using saturation.
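A short sketch of what "using saturation" buys over wrapping arithmetic (helper name and the image-brightening framing are illustrative; loadu/storeu/set1 are documented elsewhere in this module):

```rust
#[cfg(target_arch = "x86_64")]
fn saturating_brighten(pixels: &mut [u8; 16], delta: u8) {
    use std::arch::x86_64::*;
    assert!(is_x86_feature_detected!("sse2"));
    unsafe {
        let p = _mm_loadu_si128(pixels.as_ptr() as *const __m128i);
        // adds_epu8 clamps at 255 instead of wrapping: 250 + 20 => 255.
        let sum = _mm_adds_epu8(p, _mm_set1_epi8(delta as i8));
        _mm_storeu_si128(pixels.as_mut_ptr() as *mut __m128i, sum);
    }
}
```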
- _mm_addsub_pd ⚠ (x86 or x86-64) and sse3 - Alternatively add and subtract packed double-precision (64-bit) floating-point elements in a to/from packed elements in b.
- _mm_addsub_ps ⚠ (x86 or x86-64) and sse3 - Alternatively add and subtract packed single-precision (32-bit) floating-point elements in a to/from packed elements in b.
- _mm_aesdec_si128 ⚠ (x86 or x86-64) and aes - Performs one round of an AES decryption flow on data (state) in a.
- _mm_aesdeclast_si128 ⚠ (x86 or x86-64) and aes - Performs the last round of an AES decryption flow on data (state) in a.
- _mm_aesenc_si128 ⚠ (x86 or x86-64) and aes - Performs one round of an AES encryption flow on data (state) in a.
- _mm_aesenclast_si128 ⚠ (x86 or x86-64) and aes - Performs the last round of an AES encryption flow on data (state) in a.
- _mm_aesimc_si128 ⚠ (x86 or x86-64) and aes - Performs the InvMixColumns transformation on a.
- _mm_aeskeygenassist_si128 ⚠ (x86 or x86-64) and aes - Assist in expanding the AES cipher key.
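As a hedged sketch of how the per-round intrinsics above compose into the standard AES-128 encryption structure (the expanded key schedule is assumed to be supplied by the caller; deriving it via aeskeygenassist is omitted here):

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Caller must verify AES support, e.g. is_x86_feature_detected!("aes").
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "aes")]
unsafe fn aes128_encrypt_block(block: __m128i, round_keys: &[__m128i; 11]) -> __m128i {
    // Initial AddRoundKey.
    let mut state = _mm_xor_si128(block, round_keys[0]);
    // Nine full rounds (SubBytes, ShiftRows, MixColumns, AddRoundKey).
    for rk in &round_keys[1..10] {
        state = _mm_aesenc_si128(state, *rk);
    }
    // Final round skips MixColumns.
    _mm_aesenclast_si128(state, round_keys[10])
}
```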
- _mm_alignr_epi8 ⚠ (x86 or x86-64) and ssse3 - Concatenate 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by n bytes, and returns the low 16 bytes.
- _mm_and_pd ⚠ (x86 or x86-64) and sse2 - Computes the bitwise AND of packed double-precision (64-bit) floating-point elements in a and b.
- _mm_and_ps ⚠ (x86 or x86-64) and sse - Bitwise AND of packed single-precision (32-bit) floating-point elements.
- _mm_and_si128 ⚠ (x86 or x86-64) and sse2 - Computes the bitwise AND of 128 bits (representing integer data) in a and b.
- _mm_andnot_pd ⚠ (x86 or x86-64) and sse2 - Computes the bitwise NOT of a and then AND with b.
- _mm_andnot_ps ⚠ (x86 or x86-64) and sse - Bitwise AND-NOT of packed single-precision (32-bit) floating-point elements.
- _mm_andnot_si128 ⚠ (x86 or x86-64) and sse2 - Computes the bitwise NOT of 128 bits (representing integer data) in a and then AND with b.
- _mm_avg_epu8 ⚠ (x86 or x86-64) and sse2 - Averages packed unsigned 8-bit integers in a and b.
- _mm_avg_epu16 ⚠ (x86 or x86-64) and sse2 - Averages packed unsigned 16-bit integers in a and b.
- _mm_blend_epi16 ⚠ (x86 or x86-64) and sse4.1 - Blend packed 16-bit integers from a and b using the mask IMM8.
- _mm_blend_epi32 ⚠ (x86 or x86-64) and avx2 - Blends packed 32-bit integers from a and b using control mask IMM4.
- _mm_blend_pd ⚠ (x86 or x86-64) and sse4.1 - Blend packed double-precision (64-bit) floating-point elements from a and b using control mask IMM2.
- _mm_blend_ps ⚠ (x86 or x86-64) and sse4.1 - Blend packed single-precision (32-bit) floating-point elements from a and b using mask IMM4.
- _mm_blendv_epi8 ⚠ (x86 or x86-64) and sse4.1 - Blend packed 8-bit integers from a and b using mask.
- _mm_blendv_pd ⚠ (x86 or x86-64) and sse4.1 - Blend packed double-precision (64-bit) floating-point elements from a and b using mask.
- _mm_blendv_ps ⚠ (x86 or x86-64) and sse4.1 - Blend packed single-precision (32-bit) floating-point elements from a and b using mask.
- _mm_broadcast_ss ⚠ (x86 or x86-64) and avx - Broadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.
- _mm_broadcastb_epi8 ⚠ (x86 or x86-64) and avx2 - Broadcasts the low packed 8-bit integer from a to all elements of the 128-bit returned value.
- _mm_broadcastd_epi32 ⚠ (x86 or x86-64) and avx2 - Broadcasts the low packed 32-bit integer from a to all elements of the 128-bit returned value.
- _mm_broadcastq_epi64 ⚠ (x86 or x86-64) and avx2 - Broadcasts the low packed 64-bit integer from a to all elements of the 128-bit returned value.
- _mm_broadcastsd_pd ⚠ (x86 or x86-64) and avx2 - Broadcasts the low double-precision (64-bit) floating-point element from a to all elements of the 128-bit returned value.
- _mm_broadcastsi128_si256 ⚠ (x86 or x86-64) and avx2 - Broadcasts 128 bits of integer data from a to all 128-bit lanes in the 256-bit returned value.
- _mm_broadcastss_ps ⚠ (x86 or x86-64) and avx2 - Broadcasts the low single-precision (32-bit) floating-point element from a to all elements of the 128-bit returned value.
- _mm_broadcastw_epi16 ⚠ (x86 or x86-64) and avx2 - Broadcasts the low packed 16-bit integer from a to all elements of the 128-bit returned value.
- _mm_bslli_si128 ⚠ (x86 or x86-64) and sse2 - Shifts a left by IMM8 bytes while shifting in zeros.
- _mm_bsrli_si128 ⚠ (x86 or x86-64) and sse2 - Shifts a right by IMM8 bytes while shifting in zeros.
- _mm_castpd_ps ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit floating-point vector of [2 x double] into a 128-bit floating-point vector of [4 x float].
- _mm_castpd_si128 ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit floating-point vector of [2 x double] into a 128-bit integer vector.
- _mm_castps_pd ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit floating-point vector of [4 x float] into a 128-bit floating-point vector of [2 x double].
- _mm_castps_si128 ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit floating-point vector of [4 x float] into a 128-bit integer vector.
- _mm_castsi128_pd ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit integer vector into a 128-bit floating-point vector of [2 x double].
- _mm_castsi128_ps ⚠ (x86 or x86-64) and sse2 - Casts a 128-bit integer vector into a 128-bit floating-point vector of [4 x float].
- _mm_ceil_pd ⚠ (x86 or x86-64) and sse4.1 - Round the packed double-precision (64-bit) floating-point elements in a up to an integer value, and stores the results as packed double-precision floating-point elements.
- _mm_ceil_ps ⚠ (x86 or x86-64) and sse4.1 - Round the packed single-precision (32-bit) floating-point elements in a up to an integer value, and stores the results as packed single-precision floating-point elements.
- _mm_ceil_sd ⚠ (x86 or x86-64) and sse4.1 - Round the lower double-precision (64-bit) floating-point element in b up to an integer value, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from a to the upper element of the intrinsic result.
- _mm_ceil_ss ⚠ (x86 or x86-64) and sse4.1 - Round the lower single-precision (32-bit) floating-point element in b up to an integer value, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copies the upper 3 packed elements from a to the upper elements of the intrinsic result.
- _mm_clflush ⚠ (x86 or x86-64) and sse2 - Invalidates and flushes the cache line that contains p from all levels of the cache hierarchy.
- _mm_clmulepi64_si128 ⚠ (x86 or x86-64) and pclmulqdq - Performs a carry-less multiplication of two 64-bit polynomials over the finite field GF(2).
- _mm_cmp_pd ⚠ (x86 or x86-64) and avx - Compares packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by IMM5.
- _mm_cmp_ps ⚠ (x86 or x86-64) and avx - Compares packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by IMM5.
- _mm_cmp_sd ⚠ (x86 or x86-64) and avx - Compares the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by IMM5, store the result in the lower element of returned vector, and copies the upper element from a to the upper element of returned vector.
- _mm_cmp_ss ⚠ (x86 or x86-64) and avx - Compares the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by IMM5, store the result in the lower element of returned vector, and copies the upper 3 packed elements from a to the upper elements of returned vector.
- _mm_cmpeq_epi8 ⚠ (x86 or x86-64) and sse2 - Compares packed 8-bit integers in a and b for equality.
- _mm_cmpeq_epi16 ⚠ (x86 or x86-64) and sse2 - Compares packed 16-bit integers in a and b for equality.
- _mm_cmpeq_epi32 ⚠ (x86 or x86-64) and sse2 - Compares packed 32-bit integers in a and b for equality.
- _mm_cmpeq_epi64 ⚠ (x86 or x86-64) and sse4.1 - Compares packed 64-bit integers in a and b for equality.
- _mm_cmpeq_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for equality.
- _mm_cmpeq_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements were equal, or 0 otherwise.
- _mm_cmpeq_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the equality comparison of the lower elements of a and b.
- _mm_cmpeq_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for equality. The lowest 32 bits of the result will be 0xffffffff if the two inputs are equal, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
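The packed-equality comparisons pair naturally with movemask to turn per-lane 0xff/0x00 results into a scalar; a minimal byte-search sketch (helper name illustrative; loadu/set1/movemask_epi8 are documented elsewhere in this module):

```rust
#[cfg(target_arch = "x86_64")]
fn find_byte(haystack: &[u8; 16], needle: u8) -> Option<usize> {
    use std::arch::x86_64::*;
    assert!(is_x86_feature_detected!("sse2"));
    unsafe {
        let chunk = _mm_loadu_si128(haystack.as_ptr() as *const __m128i);
        // 0xff in every lane equal to `needle`, 0x00 elsewhere.
        let eq = _mm_cmpeq_epi8(chunk, _mm_set1_epi8(needle as i8));
        // Collapse the per-lane sign bits into a 16-bit scalar mask.
        let mask = _mm_movemask_epi8(eq);
        if mask != 0 { Some(mask.trailing_zeros() as usize) } else { None }
    }
}
```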
- _mm_cmpestra ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.
- _mm_cmpestrc ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return 1 if the resulting mask was non-zero, and 0 otherwise.
- _mm_cmpestri ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings a and b with lengths la and lb using the control in IMM8 and return the generated index. Similar to _mm_cmpistri with the exception that _mm_cmpistri implicitly determines the length of a and b.
- _mm_cmpestrm ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return the generated mask.
- _mm_cmpestro ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return bit 0 of the resulting bit mask.
- _mm_cmpestrs ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return 1 if any character in a was null, and 0 otherwise.
- _mm_cmpestrz ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings in a and b with lengths la and lb using the control in IMM8, and return 1 if any character in b was null, and 0 otherwise.
- _mm_cmpge_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for greater-than-or-equal.
- _mm_cmpge_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than or equal to the corresponding element in b, or 0 otherwise.
- _mm_cmpge_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the greater-than-or-equal comparison of the lower elements of a and b.
- _mm_cmpge_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for greater than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpgt_epi8 ⚠ (x86 or x86-64) and sse2 - Compares packed 8-bit integers in a and b for greater-than.
- _mm_cmpgt_epi16 ⚠ (x86 or x86-64) and sse2 - Compares packed 16-bit integers in a and b for greater-than.
- _mm_cmpgt_epi32 ⚠ (x86 or x86-64) and sse2 - Compares packed 32-bit integers in a and b for greater-than.
- _mm_cmpgt_epi64 ⚠ (x86 or x86-64) and sse4.2 - Compares packed 64-bit integers in a and b for greater-than, and returns the results.
- _mm_cmpgt_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for greater-than.
- _mm_cmpgt_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than the corresponding element in b, or 0 otherwise.
- _mm_cmpgt_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the greater-than comparison of the lower elements of a and b.
- _mm_cmpgt_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for greater than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpistra ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and return 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.
- _mm_cmpistrc ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and return 1 if the resulting mask was non-zero, and 0 otherwise.
- _mm_cmpistri ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8 and return the generated index. Similar to _mm_cmpestri with the exception that _mm_cmpestri requires the lengths of a and b to be explicitly specified.
- _mm_cmpistrm ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and return the generated mask.
- _mm_cmpistro ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and return bit 0 of the resulting bit mask.
- _mm_cmpistrs ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and returns 1 if any character in a was null, and 0 otherwise.
- _mm_cmpistrz ⚠ (x86 or x86-64) and sse4.2 - Compares packed strings with implicit lengths in a and b using the control in IMM8, and return 1 if any character in b was null, and 0 otherwise.
- _mm_cmple_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for less-than-or-equal.
- _mm_cmple_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than or equal to the corresponding element in b, or 0 otherwise.
- _mm_cmple_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the less-than-or-equal comparison of the lower elements of a and b.
- _mm_cmple_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for less than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmplt_epi8 ⚠ (x86 or x86-64) and sse2 - Compares packed 8-bit integers in a and b for less-than.
- _mm_cmplt_epi16 ⚠ (x86 or x86-64) and sse2 - Compares packed 16-bit integers in a and b for less-than.
- _mm_cmplt_epi32 ⚠ (x86 or x86-64) and sse2 - Compares packed 32-bit integers in a and b for less-than.
- _mm_cmplt_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for less-than.
- _mm_cmplt_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than the corresponding element in b, or 0 otherwise.
- _mm_cmplt_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the less-than comparison of the lower elements of a and b.
- _mm_cmplt_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for less than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpneq_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for not-equal.
- _mm_cmpneq_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements are not equal, or 0 otherwise.
- _mm_cmpneq_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the not-equal comparison of the lower elements of a and b.
- _mm_cmpneq_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for inequality. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpnge_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for not-greater-than-or-equal.
- _mm_cmpnge_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than or equal to the corresponding element in b, or 0 otherwise.
- _mm_cmpnge_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the not-greater-than-or-equal comparison of the lower elements of a and b.
- _mm_cmpnge_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for not-greater-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpngt_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for not-greater-than.
- _mm_cmpngt_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than the corresponding element in b, or 0 otherwise.
- _mm_cmpngt_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the not-greater-than comparison of the lower elements of a and b.
- _mm_cmpngt_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for not-greater-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpnle_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for not-less-than-or-equal.
- _mm_cmpnle_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than or equal to the corresponding element in b, or 0 otherwise.
- _mm_cmpnle_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the not-less-than-or-equal comparison of the lower elements of a and b.
- _mm_cmpnle_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for not-less-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpnlt_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b for not-less-than.
- _mm_cmpnlt_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than the corresponding element in b, or 0 otherwise.
- _mm_cmpnlt_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the not-less-than comparison of the lower elements of a and b.
- _mm_cmpnlt_ss ⚠ (x86 or x86-64) and sse - Compares the lowest f32 of both inputs for not-less-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpord_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b to see if neither is NaN.
- _mm_cmpord_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are ordered (i.e., neither of them is a NaN), or 0 otherwise.
- _mm_cmpord_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the result of comparing both of the lower elements of a and b to NaN. If neither are equal to NaN then 0xFFFFFFFFFFFFFFFF is used and 0 otherwise.
- _mm_cmpord_ss ⚠ (x86 or x86-64) and sse - Checks if the lowest f32 of both inputs are ordered. The lowest 32 bits of the result will be 0xffffffff if neither of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_cmpunord_pd ⚠ (x86 or x86-64) and sse2 - Compares corresponding elements in a and b to see if either is NaN.
- _mm_cmpunord_ps ⚠ (x86 or x86-64) and sse - Compares each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are unordered (i.e., at least one of them is a NaN), or 0 otherwise.
- _mm_cmpunord_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of a replaced by the result of comparing both of the lower elements of a and b to NaN. If either is equal to NaN then 0xFFFFFFFFFFFFFFFF is used and 0 otherwise.
- _mm_cmpunord_ss ⚠ (x86 or x86-64) and sse - Checks if the lowest f32 of both inputs are unordered. The lowest 32 bits of the result will be 0xffffffff if any of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.
- _mm_comieq_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for equality.
- _mm_comieq_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are equal, or 0 otherwise.
- _mm_comige_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for greater-than-or-equal.
- _mm_comige_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than or equal to the one from b, or 0 otherwise.
- _mm_comigt_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for greater-than.
- _mm_comigt_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than the one from b, or 0 otherwise.
- _mm_comile_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for less-than-or-equal.
- _mm_comile_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than or equal to the one from b, or 0 otherwise.
- _mm_comilt_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for less-than.
- _mm_comilt_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than the one from b, or 0 otherwise.
- _mm_comineq_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of a and b for not-equal.
- _mm_comineq_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are not equal, or 0 otherwise.
- _mm_crc32_u8 ⚠ (x86 or x86-64) and sse4.2 - Starting with the initial value in crc, return the accumulated CRC32-C value for unsigned 8-bit integer v.
- _mm_crc32_u16 ⚠ (x86 or x86-64) and sse4.2 - Starting with the initial value in crc, return the accumulated CRC32-C value for unsigned 16-bit integer v.
- _mm_crc32_u32 ⚠ (x86 or x86-64) and sse4.2 - Starting with the initial value in crc, return the accumulated CRC32-C value for unsigned 32-bit integer v.
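A minimal sketch of accumulating a CRC32-C over a byte slice with the intrinsics above (helper name illustrative; byte-at-a-time for clarity, since the wider u16/u32 variants accumulate the same polynomial over larger chunks):

```rust
#[cfg(target_arch = "x86_64")]
fn crc32c(mut crc: u32, data: &[u8]) -> u32 {
    use std::arch::x86_64::*;
    assert!(is_x86_feature_detected!("sse4.2"));
    for &b in data {
        // Fold each byte into the running CRC32-C value.
        crc = unsafe { _mm_crc32_u8(crc, b) };
    }
    crc
}
```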
cvt_ ⚠si2ss (x86 or x86-64) and sse
- Alias for
_mm_cvtsi32_ss
. - _mm_
cvt_ ⚠ss2si (x86 or x86-64) and sse
- Alias for
_mm_cvtss_si32
. - _mm_
cvtepi8_ ⚠epi16 (x86 or x86-64) and sse4.1
- Sign extend packed 8-bit integers in
a
to packed 16-bit integers - _mm_
cvtepi8_ ⚠epi32 (x86 or x86-64) and sse4.1
- Sign extend packed 8-bit integers in
a
to packed 32-bit integers - _mm_
cvtepi8_ ⚠epi64 (x86 or x86-64) and sse4.1
- Sign extend packed 8-bit integers in the low 8 bytes of
a
to packed 64-bit integers - _mm_
cvtepi16_ ⚠epi32 (x86 or x86-64) and sse4.1
- Sign extend packed 16-bit integers in
a
to packed 32-bit integers - _mm_
cvtepi16_ ⚠epi64 (x86 or x86-64) and sse4.1
- Sign extend packed 16-bit integers in
a
to packed 64-bit integers - _mm_
cvtepi32_ ⚠epi64 (x86 or x86-64) and sse4.1
- Sign extend packed 32-bit integers in
a
to packed 64-bit integers - _mm_
cvtepi32_ ⚠pd (x86 or x86-64) and sse2
- Converts the lower two packed 32-bit integers in
a
to packed double-precision (64-bit) floating-point elements. - _mm_
cvtepi32_ ⚠ps (x86 or x86-64) and sse2
- Converts packed 32-bit integers in
a
to packed single-precision (32-bit) floating-point elements. - _mm_
cvtepu8_ ⚠epi16 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 8-bit integers in
a
to packed 16-bit integers - _mm_
cvtepu8_ ⚠epi32 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 8-bit integers in
a
to packed 32-bit integers - _mm_
cvtepu8_ ⚠epi64 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 8-bit integers in
a
to packed 64-bit integers - _mm_
cvtepu16_ ⚠epi32 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 16-bit integers in
a
to packed 32-bit integers - _mm_
cvtepu16_ ⚠epi64 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 16-bit integers in
a
to packed 64-bit integers - _mm_
cvtepu32_ ⚠epi64 (x86 or x86-64) and sse4.1
- Zeroes extend packed unsigned 32-bit integers in
a
to packed 64-bit integers - _mm_
cvtpd_ ⚠epi32 (x86 or x86-64) and sse2
- Converts packed double-precision (64-bit) floating-point elements in
a
to packed 32-bit integers. - _mm_
cvtpd_ ⚠ps (x86 or x86-64) and sse2
- Converts packed double-precision (64-bit) floating-point elements in
a
to packed single-precision (32-bit) floating-point elements - _mm_
cvtph_ ⚠ps (x86 or x86-64) and f16c
- Converts the 4 x 16-bit half-precision float values in the lowest 64-bit of
the 128-bit vector
a
into 4 x 32-bit float values stored in a 128-bit wide vector. - _mm_
cvtps_ ⚠epi32 (x86 or x86-64) and sse2
- Converts packed single-precision (32-bit) floating-point elements in
a
to packed 32-bit integers. - _mm_
cvtps_ ⚠pd (x86 or x86-64) and sse2
- Converts packed single-precision (32-bit) floating-point elements in
a
to packed double-precision (64-bit) floating-point elements. - _mm_
cvtps_ ⚠ph (x86 or x86-64) and f16c
- Converts the 4 x 32-bit float values in the 128-bit vector
a
into 4 x 16-bit half-precision float values stored in the lowest 64-bit of a 128-bit vector. - _mm_
cvtsd_ ⚠f64 (x86 or x86-64) and sse2
- Returns the lower double-precision (64-bit) floating-point element of
a
. - _mm_
cvtsd_ ⚠si32 (x86 or x86-64) and sse2
- Converts the lower double-precision (64-bit) floating-point element in a to a 32-bit integer.
- _mm_
cvtsd_ ⚠ss (x86 or x86-64) and sse2
- Converts the lower double-precision (64-bit) floating-point element in
b
to a single-precision (32-bit) floating-point element, store the result in the lower element of the return value, and copies the upper element froma
to the upper element the return value. - _mm_
cvtsi32_ ⚠sd (x86 or x86-64) and sse2
- Returns
a
with its lower element replaced byb
after converting it to anf64
. - _mm_
cvtsi32_ ⚠si128 (x86 or x86-64) and sse2
- Returns a vector whose lowest element is
a
and all higher elements are0
. - _mm_
cvtsi32_ ⚠ss (x86 or x86-64) and sse
- Converts a 32 bit integer to a 32 bit float. The result vector is the input
vector
a
with the lowest 32 bit float replaced by the converted integer. - _mm_
cvtsi128_ ⚠si32 (x86 or x86-64) and sse2
- Returns the lowest element of
a
. - _mm_
cvtss_ ⚠f32 (x86 or x86-64) and sse
- Extracts the lowest 32 bit float from the input vector.
- _mm_
cvtss_ ⚠sd (x86 or x86-64) and sse2
- Converts the lower single-precision (32-bit) floating-point element in
b
to a double-precision (64-bit) floating-point element, store the result in the lower element of the return value, and copies the upper element froma
to the upper element the return value. - _mm_
cvtss_ ⚠si32 (x86 or x86-64) and sse
- Converts the lowest 32 bit float in the input vector to a 32 bit integer.
- _mm_
cvtt_ ⚠ss2si (x86 or x86-64) and sse
- Alias for
_mm_cvttss_si32
. - _mm_
cvttpd_ ⚠epi32 (x86 or x86-64) and sse2
- Converts packed double-precision (64-bit) floating-point elements in
a
to packed 32-bit integers with truncation. - _mm_
cvttps_ ⚠epi32 (x86 or x86-64) and sse2
- Converts packed single-precision (32-bit) floating-point elements in
a
to packed 32-bit integers with truncation. - _mm_
cvttsd_ ⚠si32 (x86 or x86-64) and sse2
- Converts the lower double-precision (64-bit) floating-point element in
a
to a 32-bit integer with truncation. - _mm_
cvttss_ ⚠si32 (x86 or x86-64) and sse
- Converts the lowest 32 bit float in the input vector to a 32 bit integer with truncation.
- _mm_div_pd ⚠ (x86 or x86-64) and sse2 - Divides packed double-precision (64-bit) floating-point elements in `a` by packed elements in `b`.
- _mm_div_ps ⚠ (x86 or x86-64) and sse - Divides packed single-precision (32-bit) floating-point elements in `a` by the corresponding elements in `b`.
- _mm_div_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the result of dividing the lower element of `a` by the lower element of `b`.
- _mm_div_ss ⚠ (x86 or x86-64) and sse - Divides the first component of `a` by the first component of `b`; the other components are copied from `a`.
- _mm_dp_pd ⚠ (x86 or x86-64) and sse4.1 - Returns the dot product of two `__m128d` vectors.
- _mm_dp_ps ⚠ (x86 or x86-64) and sse4.1 - Returns the dot product of two `__m128` vectors.
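For `_mm_dp_ps`, the `IMM8` control selects which lanes participate in the product (high nibble) and which output lanes receive the sum (low nibble). A sketch of a full 4-element dot product, assuming SSE4.1 is available at runtime:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("sse4.1") {
        unsafe {
            let a = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
            let b = _mm_setr_ps(5.0, 6.0, 7.0, 8.0);
            // 0xF1: multiply all four lanes, write the sum to lane 0 only.
            let dp = _mm_dp_ps::<0xF1>(a, b);
            assert_eq!(_mm_cvtss_f32(dp), 70.0); // 5 + 12 + 21 + 32
        }
    }
}
```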
- _mm_extract_epi8 ⚠ (x86 or x86-64) and sse4.1 - Extracts an 8-bit integer from `a`, selected with `IMM8`. Returns a 32-bit integer containing the zero-extended integer data.
- _mm_extract_epi16 ⚠ (x86 or x86-64) and sse2 - Returns the `imm8` element of `a`.
- _mm_extract_epi32 ⚠ (x86 or x86-64) and sse4.1 - Extracts a 32-bit integer from `a`, selected with `IMM8`.
- _mm_extract_ps ⚠ (x86 or x86-64) and sse4.1 - Extracts a single-precision (32-bit) floating-point element from `a`, selected with `IMM8`. The returned `i32` stores the float's bit pattern, and may be converted back to a floating-point number via casting.
- _mm_extract_si64 ⚠ (x86 or x86-64) and sse4a - Extracts the bit range specified by `y` from the lower 64 bits of `x`.
- _mm_extracti_si64 ⚠ (x86 or x86-64) and sse4a - Extracts the specified bits from the lower 64 bits of the 128-bit integer vector operand at the index `idx` and of the length `len`.
- _mm_floor_pd ⚠ (x86 or x86-64) and sse4.1 - Rounds the packed double-precision (64-bit) floating-point elements in `a` down to an integer value, and stores the results as packed double-precision floating-point elements.
- _mm_floor_ps ⚠ (x86 or x86-64) and sse4.1 - Rounds the packed single-precision (32-bit) floating-point elements in `a` down to an integer value, and stores the results as packed single-precision floating-point elements.
- _mm_floor_sd ⚠ (x86 or x86-64) and sse4.1 - Rounds the lower double-precision (64-bit) floating-point element in `b` down to an integer value, stores the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from `a` to the upper element of the intrinsic result.
- _mm_floor_ss ⚠ (x86 or x86-64) and sse4.1 - Rounds the lower single-precision (32-bit) floating-point element in `b` down to an integer value, stores the result as a single-precision floating-point element in the lower element of the intrinsic result, and copies the upper 3 packed elements from `a` to the upper elements of the intrinsic result.
- _mm_fmadd_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c`.
- _mm_fmadd_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c`.
- _mm_fmadd_sd ⚠ (x86 or x86-64) and fma - Multiplies the lower double-precision (64-bit) floating-point elements in `a` and `b`, and adds the intermediate result to the lower element in `c`. Stores the result in the lower element of the returned value, and copies the upper element from `a` to the upper elements of the result.
- _mm_fmadd_ss ⚠ (x86 or x86-64) and fma - Multiplies the lower single-precision (32-bit) floating-point elements in `a` and `b`, and adds the intermediate result to the lower element in `c`. Stores the result in the lower element of the returned value, and copies the 3 upper elements from `a` to the upper elements of the result.
- _mm_fmaddsub_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternately adds and subtracts packed elements in `c` to/from the intermediate result.
- _mm_fmaddsub_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternately adds and subtracts packed elements in `c` to/from the intermediate result.
- _mm_fmsub_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result.
- _mm_fmsub_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result.
- _mm_fmsub_sd ⚠ (x86 or x86-64) and fma - Multiplies the lower double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts the lower element in `c` from the intermediate result. Stores the result in the lower element of the returned value, and copies the upper element from `a` to the upper elements of the result.
- _mm_fmsub_ss ⚠ (x86 or x86-64) and fma - Multiplies the lower single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts the lower element in `c` from the intermediate result. Stores the result in the lower element of the returned value, and copies the 3 upper elements from `a` to the upper elements of the result.
- _mm_fmsubadd_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternately subtracts and adds packed elements in `c` from/to the intermediate result.
- _mm_fmsubadd_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternately subtracts and adds packed elements in `c` from/to the intermediate result.
- _mm_fnmadd_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c`.
- _mm_fnmadd_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c`.
- _mm_fnmadd_sd ⚠ (x86 or x86-64) and fma - Multiplies the lower double-precision (64-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to the lower element in `c`. Stores the result in the lower element of the returned value, and copies the upper element from `a` to the upper elements of the result.
- _mm_fnmadd_ss ⚠ (x86 or x86-64) and fma - Multiplies the lower single-precision (32-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to the lower element in `c`. Stores the result in the lower element of the returned value, and copies the 3 upper elements from `a` to the upper elements of the result.
- _mm_fnmsub_pd ⚠ (x86 or x86-64) and fma - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result.
- _mm_fnmsub_ps ⚠ (x86 or x86-64) and fma - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result.
- _mm_fnmsub_sd ⚠ (x86 or x86-64) and fma - Multiplies the lower double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts the lower element in `c` from the negated intermediate result. Stores the result in the lower element of the returned value, and copies the upper element from `a` to the upper elements of the result.
- _mm_fnmsub_ss ⚠ (x86 or x86-64) and fma - Multiplies the lower single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts the lower element in `c` from the negated intermediate result. Stores the result in the lower element of the returned value, and copies the 3 upper elements from `a` to the upper elements of the result.
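The `_mm_fm*` family performs the multiply and the add/subtract as one fused operation with a single rounding step, which is why it is gated on the `fma` feature rather than plain SSE. A minimal sketch of `_mm_fmadd_ps` (per-lane `a * b + c`) with runtime detection:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("fma") {
        unsafe {
            let a = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
            let b = _mm_set1_ps(10.0);
            let c = _mm_set1_ps(0.5);
            // Each lane computes a * b + c with one rounding at the end.
            let r = _mm_fmadd_ps(a, b, c);
            let mut out = [0.0f32; 4];
            _mm_storeu_ps(out.as_mut_ptr(), r);
            assert_eq!(out, [10.5, 20.5, 30.5, 40.5]);
        }
    }
}
```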
- _mm_getcsr ⚠ Deprecated (x86 or x86-64) and sse - Gets the unsigned 32-bit value of the MXCSR control and status register.
- _mm_hadd_epi16 ⚠ (x86 or x86-64) and ssse3 - Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of `[8 x i16]`.
- _mm_hadd_epi32 ⚠ (x86 or x86-64) and ssse3 - Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of `[4 x i32]`.
- _mm_hadd_pd ⚠ (x86 or x86-64) and sse3 - Horizontally adds adjacent pairs of double-precision (64-bit) floating-point elements in `a` and `b`, and packs the results.
- _mm_hadd_ps ⚠ (x86 or x86-64) and sse3 - Horizontally adds adjacent pairs of single-precision (32-bit) floating-point elements in `a` and `b`, and packs the results.
- _mm_hadds_epi16 ⚠ (x86 or x86-64) and ssse3 - Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of `[8 x i16]`. Positive sums greater than 7FFFh are saturated to 7FFFh. Negative sums less than 8000h are saturated to 8000h.
- _mm_hsub_epi16 ⚠ (x86 or x86-64) and ssse3 - Horizontally subtracts the adjacent pairs of values contained in 2 packed 128-bit vectors of `[8 x i16]`.
- _mm_hsub_epi32 ⚠ (x86 or x86-64) and ssse3 - Horizontally subtracts the adjacent pairs of values contained in 2 packed 128-bit vectors of `[4 x i32]`.
- _mm_hsub_pd ⚠ (x86 or x86-64) and sse3 - Horizontally subtracts adjacent pairs of double-precision (64-bit) floating-point elements in `a` and `b`, and packs the results.
- _mm_hsub_ps ⚠ (x86 or x86-64) and sse3 - Horizontally subtracts adjacent pairs of single-precision (32-bit) floating-point elements in `a` and `b`, and packs the results.
- _mm_hsubs_epi16 ⚠ (x86 or x86-64) and ssse3 - Horizontally subtracts the adjacent pairs of values contained in 2 packed 128-bit vectors of `[8 x i16]`. Positive differences greater than 7FFFh are saturated to 7FFFh. Negative differences less than 8000h are saturated to 8000h.
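A common use of the horizontal-add intrinsics is reducing a vector to a scalar: two `_mm_hadd_ps` passes collapse four lanes into one sum. A sketch, assuming SSE3:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("sse3") {
        unsafe {
            let v = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
            // Pass 1 yields [1+2, 3+4, 1+2, 3+4]; pass 2 leaves the
            // full sum in every lane, so lane 0 holds the result.
            let s = _mm_hadd_ps(v, v);
            let s = _mm_hadd_ps(s, s);
            assert_eq!(_mm_cvtss_f32(s), 10.0);
        }
    }
}
```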
- _mm_i32gather_epi32 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i32gather_epi64 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i32gather_pd ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i32gather_ps ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i64gather_epi32 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i64gather_epi64 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i64gather_pd ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
- _mm_i64gather_ps ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8.
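Since the scale multiplies byte offsets, passing the element size as `SCALE` lets the offsets act as plain element indices. A sketch of `_mm_i32gather_epi32` assuming AVX2 (the `data` table here is illustrative):

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("avx2") {
        unsafe {
            let data: [i32; 8] = [10, 11, 12, 13, 14, 15, 16, 17];
            let idx = _mm_setr_epi32(0, 2, 4, 6);
            // SCALE = 4 bytes per step, so each offset is an i32 index.
            let g = _mm_i32gather_epi32::<4>(data.as_ptr(), idx);
            let mut out = [0i32; 4];
            _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, g);
            assert_eq!(out, [10, 12, 14, 16]);
        }
    }
}
```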
- _mm_insert_epi8 ⚠ (x86 or x86-64) and sse4.1 - Returns a copy of `a` with the 8-bit integer from `i` inserted at a location specified by `IMM8`.
- _mm_insert_epi16 ⚠ (x86 or x86-64) and sse2 - Returns a new vector where the `imm8` element of `a` is replaced with `i`.
- _mm_insert_epi32 ⚠ (x86 or x86-64) and sse4.1 - Returns a copy of `a` with the 32-bit integer from `i` inserted at a location specified by `IMM8`.
- _mm_insert_ps ⚠ (x86 or x86-64) and sse4.1 - Selects a single value in `b` to store at some position in `a`, then zeroes elements according to `IMM8`.
- _mm_insert_si64 ⚠ (x86 or x86-64) and sse4a - Inserts the `[length:0]` bits of `y` into `x` at `index`.
- _mm_inserti_si64 ⚠ (x86 or x86-64) and sse4a - Inserts the `len` least-significant bits from the lower 64 bits of the 128-bit integer vector operand `y` into the lower 64 bits of the 128-bit integer vector operand `x` at the index `idx` and of the length `len`.
- _mm_lddqu_si128 ⚠ (x86 or x86-64) and sse3 - Loads 128 bits of integer data from unaligned memory. This intrinsic may perform better than `_mm_loadu_si128` when the data crosses a cache line boundary.
- _mm_lfence ⚠ (x86 or x86-64) and sse2 - Performs a serializing operation on all load-from-memory instructions that were issued prior to this instruction.
- _mm_load1_pd ⚠ (x86 or x86-64) and sse2 - Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector.
- _mm_load1_ps ⚠ (x86 or x86-64) and sse - Constructs a `__m128` by duplicating the value read from `p` into all elements.
- _mm_load_pd ⚠ (x86 or x86-64) and sse2 - Loads 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_load_pd1 ⚠ (x86 or x86-64) and sse2 - Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector.
- _mm_load_ps ⚠ (x86 or x86-64) and sse - Loads four `f32` values from aligned memory into a `__m128`. If the pointer is not aligned to a 128-bit boundary (16 bytes) a general protection fault will be triggered (fatal program crash).
- _mm_load_ps1 ⚠ (x86 or x86-64) and sse - Alias for `_mm_load1_ps`.
- _mm_load_sd ⚠ (x86 or x86-64) and sse2 - Loads a 64-bit double-precision value into the low element of a 128-bit vector and clears the upper element.
- _mm_load_si128 ⚠ (x86 or x86-64) and sse2 - Loads 128 bits of integer data from memory into a new vector.
- _mm_load_ss ⚠ (x86 or x86-64) and sse - Constructs a `__m128` with the lowest element read from `p` and the other elements set to zero.
- _mm_loaddup_pd ⚠ (x86 or x86-64) and sse3 - Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector.
- _mm_loadh_pd ⚠ (x86 or x86-64) and sse2 - Loads a double-precision value into the high-order bits of a 128-bit vector of `[2 x double]`. The low-order bits are copied from the low-order bits of the first operand.
- _mm_loadl_epi64 ⚠ (x86 or x86-64) and sse2 - Loads a 64-bit integer from memory into the first element of the returned vector.
- _mm_loadl_pd ⚠ (x86 or x86-64) and sse2 - Loads a double-precision value into the low-order bits of a 128-bit vector of `[2 x double]`. The high-order bits are copied from the high-order bits of the first operand.
- _mm_loadr_pd ⚠ (x86 or x86-64) and sse2 - Loads 2 double-precision (64-bit) floating-point elements from memory into the returned vector in reverse order. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_loadr_ps ⚠ (x86 or x86-64) and sse - Loads four `f32` values from aligned memory into a `__m128` in reverse order.
- _mm_loadu_pd ⚠ (x86 or x86-64) and sse2 - Loads 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. `mem_addr` does not need to be aligned on any particular boundary.
- _mm_loadu_ps ⚠ (x86 or x86-64) and sse - Loads four `f32` values from memory into a `__m128`. There are no restrictions on memory alignment. For aligned memory `_mm_load_ps` may be faster.
- _mm_loadu_si16 ⚠ (x86 or x86-64) and sse2 - Loads unaligned 16 bits of integer data from memory into a new vector.
- _mm_loadu_si32 ⚠ (x86 or x86-64) and sse2 - Loads unaligned 32 bits of integer data from memory into a new vector.
- _mm_loadu_si64 ⚠ (x86 or x86-64) and sse2 - Loads unaligned 64 bits of integer data from memory into a new vector.
- _mm_loadu_si128 ⚠ (x86 or x86-64) and sse2 - Loads 128 bits of integer data from memory into a new vector.
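The aligned and unaligned load variants differ only in their alignment contract: `_mm_load_ps` requires a 16-byte-aligned pointer, `_mm_loadu_ps` accepts any address. A sketch using an over-aligned array so that the aligned load is sound:

```rust
use std::arch::x86_64::*;

// Force 16-byte alignment so `_mm_load_ps` is permitted on `buf.0`.
#[repr(align(16))]
struct Aligned([f32; 4]);

fn main() {
    let buf = Aligned([1.0, 2.0, 3.0, 4.0]);
    unsafe {
        let a = _mm_load_ps(buf.0.as_ptr());  // requires 16-byte alignment
        let u = _mm_loadu_ps(buf.0.as_ptr()); // no alignment requirement
        // All four lanes compare equal, so the mask is 0b1111.
        assert_eq!(_mm_movemask_ps(_mm_cmpeq_ps(a, u)), 0b1111);
    }
}
```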
- _mm_madd_epi16 ⚠ (x86 or x86-64) and sse2 - Multiplies and then horizontally adds signed 16-bit integers in `a` and `b`.
- _mm_maddubs_epi16 ⚠ (x86 or x86-64) and ssse3 - Multiplies corresponding pairs of packed 8-bit unsigned integer values contained in the first source operand and packed 8-bit signed integer values contained in the second source operand, adds pairs of contiguous products with signed saturation, and writes the 16-bit sums to the corresponding bits in the destination.
- _mm_mask_i32gather_epi32 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i32gather_epi64 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i32gather_pd ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i32gather_ps ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i64gather_epi32 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i64gather_epi64 ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i64gather_pd ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_mask_i64gather_ps ⚠ (x86 or x86-64) and avx2 - Returns values from `slice` at offsets determined by `offsets * scale`, where `scale` should be 1, 2, 4 or 8. If the mask is not set, loads the value from `src` in that position instead.
- _mm_maskload_epi32 ⚠ (x86 or x86-64) and avx2 - Loads packed 32-bit integers from memory pointed to by `mem_addr` using `mask` (elements are zeroed out when the highest bit is not set in the corresponding element).
- _mm_maskload_epi64 ⚠ (x86 or x86-64) and avx2 - Loads packed 64-bit integers from memory pointed to by `mem_addr` using `mask` (elements are zeroed out when the highest bit is not set in the corresponding element).
- _mm_maskload_pd ⚠ (x86 or x86-64) and avx - Loads packed double-precision (64-bit) floating-point elements from memory into the result using `mask` (elements are zeroed out when the high bit of the corresponding element is not set).
- _mm_maskload_ps ⚠ (x86 or x86-64) and avx - Loads packed single-precision (32-bit) floating-point elements from memory into the result using `mask` (elements are zeroed out when the high bit of the corresponding element is not set).
- _mm_maskmoveu_si128 ⚠ (x86 or x86-64) and sse2 - Conditionally stores 8-bit integer elements from `a` into memory using `mask`.
- _mm_maskstore_epi32 ⚠ (x86 or x86-64) and avx2 - Stores packed 32-bit integers from `a` into memory pointed to by `mem_addr` using `mask` (elements are not stored when the highest bit is not set in the corresponding element).
- _mm_maskstore_epi64 ⚠ (x86 or x86-64) and avx2 - Stores packed 64-bit integers from `a` into memory pointed to by `mem_addr` using `mask` (elements are not stored when the highest bit is not set in the corresponding element).
- _mm_maskstore_pd ⚠ (x86 or x86-64) and avx - Stores packed double-precision (64-bit) floating-point elements from `a` into memory using `mask`.
- _mm_maskstore_ps ⚠ (x86 or x86-64) and avx - Stores packed single-precision (32-bit) floating-point elements from `a` into memory using `mask`.
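For the mask load/store intrinsics, the mask is itself a vector: the high bit of each element selects whether that lane is active. A sketch of `_mm_maskload_ps`, assuming AVX; `-1` (all bits set) enables a lane and `0` disables it:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("avx") {
        unsafe {
            let src = [1.0f32, 2.0, 3.0, 4.0];
            // High bit set (via -1) enables a lane; 0 leaves it zeroed.
            let mask = _mm_setr_epi32(-1, 0, -1, 0);
            let v = _mm_maskload_ps(src.as_ptr(), mask);
            let mut out = [0.0f32; 4];
            _mm_storeu_ps(out.as_mut_ptr(), v);
            assert_eq!(out, [1.0, 0.0, 3.0, 0.0]);
        }
    }
}
```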
- _mm_max_epi8 ⚠ (x86 or x86-64) and sse4.1 - Compares packed 8-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_epi16 ⚠ (x86 or x86-64) and sse2 - Compares packed 16-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_epi32 ⚠ (x86 or x86-64) and sse4.1 - Compares packed 32-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_epu8 ⚠ (x86 or x86-64) and sse2 - Compares packed unsigned 8-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_epu16 ⚠ (x86 or x86-64) and sse4.1 - Compares packed unsigned 16-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_epu32 ⚠ (x86 or x86-64) and sse4.1 - Compares packed unsigned 32-bit integers in `a` and `b`, and returns the packed maximum values.
- _mm_max_pd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the maximum values from corresponding elements in `a` and `b`.
- _mm_max_ps ⚠ (x86 or x86-64) and sse - Compares packed single-precision (32-bit) floating-point elements in `a` and `b`, and returns the corresponding maximum values.
- _mm_max_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the maximum of the lower elements of `a` and `b`.
- _mm_max_ss ⚠ (x86 or x86-64) and sse - Compares the first single-precision (32-bit) floating-point element of `a` and `b`, and returns the maximum value in the first element of the return value; the other elements are copied from `a`.
- _mm_mfence ⚠ (x86 or x86-64) and sse2 - Performs a serializing operation on all load-from-memory and store-to-memory instructions that were issued prior to this instruction.
- _mm_min_epi8 ⚠ (x86 or x86-64) and sse4.1 - Compares packed 8-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_epi16 ⚠ (x86 or x86-64) and sse2 - Compares packed 16-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_epi32 ⚠ (x86 or x86-64) and sse4.1 - Compares packed 32-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_epu8 ⚠ (x86 or x86-64) and sse2 - Compares packed unsigned 8-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_epu16 ⚠ (x86 or x86-64) and sse4.1 - Compares packed unsigned 16-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_epu32 ⚠ (x86 or x86-64) and sse4.1 - Compares packed unsigned 32-bit integers in `a` and `b`, and returns the packed minimum values.
- _mm_min_pd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the minimum values from corresponding elements in `a` and `b`.
- _mm_min_ps ⚠ (x86 or x86-64) and sse - Compares packed single-precision (32-bit) floating-point elements in `a` and `b`, and returns the corresponding minimum values.
- _mm_min_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the minimum of the lower elements of `a` and `b`.
- _mm_min_ss ⚠ (x86 or x86-64) and sse - Compares the first single-precision (32-bit) floating-point element of `a` and `b`, and returns the minimum value in the first element of the return value; the other elements are copied from `a`.
- _mm_minpos_epu16 ⚠ (x86 or x86-64) and sse4.1 - Finds the minimum unsigned 16-bit element in the 128-bit `__m128i` vector, returning a vector containing its value in its first position and its index in its second position; all other elements are set to zero.
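`_mm_minpos_epu16` is unusual in returning two results at once: lane 0 of the output holds the minimum value and lane 1 holds its index. A sketch, assuming SSE4.1:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("sse4.1") {
        unsafe {
            let v = _mm_setr_epi16(9, 4, 7, 3, 8, 5, 6, 2);
            let r = _mm_minpos_epu16(v);
            assert_eq!(_mm_extract_epi16::<0>(r), 2); // minimum value
            assert_eq!(_mm_extract_epi16::<1>(r), 7); // its lane index
        }
    }
}
```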
- _mm_move_epi64 ⚠ (x86 or x86-64) and sse2 - Returns a vector where the low element is extracted from `a` and its upper element is zero.
- _mm_move_sd ⚠ (x86 or x86-64) and sse2 - Constructs a 128-bit floating-point vector of `[2 x double]`. The lower 64 bits are set to the lower 64 bits of the second parameter. The upper 64 bits are set to the upper 64 bits of the first parameter.
- _mm_move_ss ⚠ (x86 or x86-64) and sse - Returns a `__m128` with the first component from `b` and the remaining components from `a`.
- _mm_movedup_pd ⚠ (x86 or x86-64) and sse3 - Duplicates the low double-precision (64-bit) floating-point element from `a`.
- _mm_movehdup_ps ⚠ (x86 or x86-64) and sse3 - Duplicates odd-indexed single-precision (32-bit) floating-point elements from `a`.
- _mm_movehl_ps ⚠ (x86 or x86-64) and sse - Combines the higher halves of `a` and `b`. The higher half of `b` occupies the lower half of the result.
- _mm_moveldup_ps ⚠ (x86 or x86-64) and sse3 - Duplicates even-indexed single-precision (32-bit) floating-point elements from `a`.
- _mm_movelh_ps ⚠ (x86 or x86-64) and sse - Combines the lower halves of `a` and `b`. The lower half of `b` occupies the higher half of the result.
- _mm_movemask_epi8 ⚠ (x86 or x86-64) and sse2 - Returns a mask of the most significant bit of each element in `a`.
- _mm_movemask_pd ⚠ (x86 or x86-64) and sse2 - Returns a mask of the most significant bit of each element in `a`.
- _mm_movemask_ps ⚠ (x86 or x86-64) and sse - Returns a mask of the most significant bit of each element in `a`.
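A movemask after a vector compare is the standard way to get per-lane test results into scalar code. A minimal sketch (SSE is baseline on x86-64, so no runtime check is shown):

```rust
use std::arch::x86_64::*;

fn main() {
    unsafe {
        let v = _mm_setr_ps(1.0, -2.0, 3.0, -4.0);
        let zero = _mm_setzero_ps();
        // Bit i of the mask is the compare result of lane i.
        let m = _mm_movemask_ps(_mm_cmplt_ps(v, zero));
        assert_eq!(m, 0b1010); // lanes 1 and 3 are negative
    }
}
```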
- _mm_mpsadbw_epu8 ⚠ (x86 or x86-64) and sse4.1 - Subtracts 8-bit unsigned integer values and computes the absolute values of the differences to the corresponding bits in the destination. Then sums of the absolute differences are returned according to the bit fields in the immediate operand.
- _mm_mul_epi32 ⚠ (x86 or x86-64) and sse4.1 - Multiplies the low 32-bit integers from each packed 64-bit element in `a` and `b`, and returns the signed 64-bit result.
- _mm_mul_epu32 ⚠ (x86 or x86-64) and sse2 - Multiplies the low unsigned 32-bit integers from each packed 64-bit element in `a` and `b`.
- _mm_mul_pd ⚠ (x86 or x86-64) and sse2 - Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`.
- _mm_mul_ps ⚠ (x86 or x86-64) and sse - Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`.
- _mm_mul_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the product of the low elements of `a` and `b`.
- _mm_mul_ss ⚠ (x86 or x86-64) and sse - Multiplies the first component of `a` and `b`; the other components are copied from `a`.
- _mm_mulhi_epi16 ⚠ (x86 or x86-64) and sse2 - Multiplies the packed 16-bit integers in `a` and `b`, returning the high 16 bits of each intermediate 32-bit product.
- _mm_mulhi_epu16 ⚠ (x86 or x86-64) and sse2 - Multiplies the packed unsigned 16-bit integers in `a` and `b`, returning the high 16 bits of each intermediate 32-bit product.
- _mm_mulhrs_epi16 ⚠ (x86 or x86-64) and ssse3 - Multiplies packed 16-bit signed integer values, truncates the 32-bit products to the 18 most significant bits by right-shifting, rounds the truncated value by adding 1, and writes bits `[16:1]` to the destination.
- _mm_mullo_epi16 ⚠ (x86 or x86-64) and sse2 - Multiplies the packed 16-bit integers in `a` and `b`, returning the low 16 bits of each intermediate 32-bit product.
- _mm_mullo_epi32 ⚠ (x86 or x86-64) and sse4.1 - Multiplies the packed 32-bit integers in `a` and `b`, producing intermediate 64-bit integers, and returns the lowest 32 bits, whatever they might be, reinterpreted as a signed integer. While `pmulld __m128i::splat(2), __m128i::splat(2)` returns the obvious `__m128i::splat(4)`, due to wrapping arithmetic `pmulld __m128i::splat(i32::MAX), __m128i::splat(2)` would return a negative number.
- _mm_or_pd ⚠ (x86 or x86-64) and sse2 - Computes the bitwise OR of `a` and `b`.
- _mm_or_ps ⚠ (x86 or x86-64) and sse - Computes the bitwise OR of packed single-precision (32-bit) floating-point elements.
- _mm_or_si128 ⚠ (x86 or x86-64) and sse2 - Computes the bitwise OR of 128 bits (representing integer data) in `a` and `b`.
- _mm_packs_epi16 ⚠ (x86 or x86-64) and sse2 - Converts packed 16-bit integers from `a` and `b` to packed 8-bit integers using signed saturation.
- _mm_packs_epi32 ⚠ (x86 or x86-64) and sse2 - Converts packed 32-bit integers from `a` and `b` to packed 16-bit integers using signed saturation.
- _mm_packus_epi16 ⚠ (x86 or x86-64) and sse2 - Converts packed 16-bit integers from `a` and `b` to packed 8-bit integers using unsigned saturation.
- _mm_packus_epi32 ⚠ (x86 or x86-64) and sse4.1 - Converts packed 32-bit integers from `a` and `b` to packed 16-bit integers using unsigned saturation.
- _mm_pause ⚠ x86 or x86-64 - Provides a hint to the processor that the code sequence is a spin-wait loop.
- _mm_permute_pd ⚠ (x86 or x86-64) and avx - Shuffles double-precision (64-bit) floating-point elements in `a` using the control in `imm8`.
- _mm_permute_ps ⚠ (x86 or x86-64) and avx - Shuffles single-precision (32-bit) floating-point elements in `a` using the control in `imm8`.
- _mm_permutevar_pd ⚠ (x86 or x86-64) and avx - Shuffles double-precision (64-bit) floating-point elements in `a` using the control in `b`.
- _mm_permutevar_ps ⚠ (x86 or x86-64) and avx - Shuffles single-precision (32-bit) floating-point elements in `a` using the control in `b`.
- _mm_prefetch ⚠ (x86 or x86-64) and sse - Fetches the cache line that contains address `p` using the given `STRATEGY`.
- _mm_rcp_ps ⚠ (x86 or x86-64) and sse - Returns the approximate reciprocal of packed single-precision (32-bit) floating-point elements in `a`.
- _mm_rcp_ss ⚠ (x86 or x86-64) and sse - Returns the approximate reciprocal of the first single-precision (32-bit) floating-point element in `a`; the other elements are unchanged.
- _mm_round_pd ⚠ (x86 or x86-64) and sse4.1 - Rounds the packed double-precision (64-bit) floating-point elements in `a` using the `ROUNDING` parameter, and stores the results as packed double-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of the `_MM_FROUND_*` modes.
- _mm_round_ps ⚠ (x86 or x86-64) and sse4.1 - Rounds the packed single-precision (32-bit) floating-point elements in `a` using the `ROUNDING` parameter, and stores the results as packed single-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of the `_MM_FROUND_*` modes.
- _mm_round_sd ⚠ (x86 or x86-64) and sse4.1 - Rounds the lower double-precision (64-bit) floating-point element in `b` using the `ROUNDING` parameter, stores the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from `a` to the upper element of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of the `_MM_FROUND_*` modes.
- _mm_round_ss ⚠ (x86 or x86-64) and sse4.1 - Rounds the lower single-precision (32-bit) floating-point element in `b` using the `ROUNDING` parameter, stores the result as a single-precision floating-point element in the lower element of the intrinsic result, and copies the upper 3 packed elements from `a` to the upper elements of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of the `_MM_FROUND_*` modes.
- _mm_rsqrt_ps ⚠ (x86 or x86-64) and sse - Returns the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in `a`.
- _mm_rsqrt_ss ⚠ (x86 or x86-64) and sse - Returns the approximate reciprocal square root of the first single-precision (32-bit) floating-point element in `a`; the other elements are unchanged.
- _mm_sad_epu8 ⚠ (x86 or x86-64) and sse2 - Sums the absolute differences of packed unsigned 8-bit integers.
- _mm_set1_epi8 ⚠ (x86 or x86-64) and sse2 - Broadcasts 8-bit integer `a` to all elements.
- _mm_set1_epi16 ⚠ (x86 or x86-64) and sse2 - Broadcasts 16-bit integer `a` to all elements.
- _mm_set1_epi32 ⚠ (x86 or x86-64) and sse2 - Broadcasts 32-bit integer `a` to all elements.
- _mm_set1_epi64x ⚠ (x86 or x86-64) and sse2 - Broadcasts 64-bit integer `a` to all elements.
- _mm_set1_pd ⚠ (x86 or x86-64) and sse2 - Broadcasts double-precision (64-bit) floating-point value `a` to all elements of the return value.
- _mm_set1_ps ⚠ (x86 or x86-64) and sse - Constructs a `__m128` with all elements set to `a`.
- _mm_set_epi8 ⚠ (x86 or x86-64) and sse2 - Sets packed 8-bit integers with the supplied values.
- _mm_set_epi16 ⚠ (x86 or x86-64) and sse2 - Sets packed 16-bit integers with the supplied values.
- _mm_set_epi32 ⚠ (x86 or x86-64) and sse2 - Sets packed 32-bit integers with the supplied values.
- _mm_set_epi64x ⚠ (x86 or x86-64) and sse2 - Sets packed 64-bit integers with the supplied values, from highest to lowest.
- _mm_set_pd ⚠ (x86 or x86-64) and sse2 - Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values.
- _mm_set_pd1 ⚠ (x86 or x86-64) and sse2 - Broadcasts double-precision (64-bit) floating-point value `a` to all elements of the return value.
- _mm_set_ps ⚠ (x86 or x86-64) and sse - Constructs a `__m128` from four floating-point values, highest to lowest.
- _mm_set_ps1 ⚠ (x86 or x86-64) and sse - Alias for `_mm_set1_ps`.
- _mm_set_sd ⚠ (x86 or x86-64) and sse2 - Copies double-precision (64-bit) floating-point element `a` to the lower element of the return value.
- _mm_set_ss ⚠ (x86 or x86-64) and sse - Constructs a `__m128` with the lowest element set to `a` and the rest set to zero.
- _mm_setcsr ⚠ Deprecated (x86 or x86-64) and sse - Sets the MXCSR register with the 32-bit unsigned integer value.
- _mm_setr_epi8 ⚠ (x86 or x86-64) and sse2 - Sets packed 8-bit integers with the supplied values in reverse order.
- _mm_setr_epi16 ⚠ (x86 or x86-64) and sse2 - Sets packed 16-bit integers with the supplied values in reverse order.
- _mm_setr_epi32 ⚠ (x86 or x86-64) and sse2 - Sets packed 32-bit integers with the supplied values in reverse order.
- _mm_setr_pd ⚠ (x86 or x86-64) and sse2 - Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values in reverse order.
- _mm_setr_ps ⚠ (x86 or x86-64) and sse - Constructs a `__m128` from four floating-point values, lowest to highest.
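The `_mm_set_*` constructors take arguments highest lane first, while `_mm_setr_*` take them lowest lane first; mixing the two up is a classic source of bugs. A quick sketch:

```rust
use std::arch::x86_64::*;

fn main() {
    unsafe {
        // `set` lists lanes highest to lowest; `setr` lowest to highest.
        let a = _mm_set_ps(4.0, 3.0, 2.0, 1.0);
        let b = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
        // The two vectors are identical lane for lane.
        assert_eq!(_mm_movemask_ps(_mm_cmpeq_ps(a, b)), 0b1111);
    }
}
```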
- _mm_setzero_pd ⚠ (x86 or x86-64) and sse2 - Returns packed double-precision (64-bit) floating-point elements with all zeros.
- _mm_setzero_ps ⚠ (x86 or x86-64) and sse - Constructs a `__m128` with all elements initialized to zero.
- _mm_setzero_si128 ⚠ (x86 or x86-64) and sse2 - Returns a vector with all elements set to zero.
- _mm_sfence ⚠ (x86 or x86-64) and sse - Performs a serializing operation on all non-temporal ("streaming") store instructions that were issued by the current thread prior to this instruction.
- _mm_sha1msg1_epu32 ⚠ (x86 or x86-64) and sha - Performs an intermediate calculation for the next four SHA1 message values (unsigned 32-bit integers) using previous message values from `a` and `b`, and returns the result.
- _mm_sha1msg2_epu32 ⚠ (x86 or x86-64) and sha - Performs the final calculation for the next four SHA1 message values (unsigned 32-bit integers) using the intermediate result in `a` and the previous message values in `b`, and returns the result.
- _mm_sha1nexte_epu32 ⚠ (x86 or x86-64) and sha - Calculates SHA1 state variable E after four rounds of operation from the current SHA1 state variable `a`, adds that value to the scheduled values (unsigned 32-bit integers) in `b`, and returns the result.
- _mm_sha1rnds4_epu32 ⚠ (x86 or x86-64) and sha - Performs four rounds of SHA1 operation using an initial SHA1 state (A,B,C,D) from `a` and some pre-computed sum of the next 4 round message values (unsigned 32-bit integers) and state variable E from `b`, and returns the updated SHA1 state (A,B,C,D). `FUNC` contains the logic functions and round constants.
- _mm_sha256msg1_epu32 ⚠ (x86 or x86-64) and sha - Performs an intermediate calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from `a` and `b`, and returns the result.
- _mm_sha256msg2_epu32 ⚠ (x86 or x86-64) and sha - Performs the final calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from `a` and `b`, and returns the result.
- _mm_sha256rnds2_epu32 ⚠ (x86 or x86-64) and sha - Performs 2 rounds of SHA256 operation using an initial SHA256 state (C,D,G,H) from `a`, an initial SHA256 state (A,B,E,F) from `b`, and a pre-computed sum of the next 2 round message values (unsigned 32-bit integers) and the corresponding round constants from `k`, and stores the updated SHA256 state (A,B,E,F) in the destination.
- _mm_shuffle_epi8 ⚠ (x86 or x86-64) and ssse3 - Shuffles bytes from `a` according to the content of `b`.
- _mm_shuffle_epi32 ⚠ (x86 or x86-64) and sse2 - Shuffles 32-bit integers in `a` using the control in `IMM8`.
- _mm_shuffle_pd ⚠ (x86 or x86-64) and sse2 - Constructs a 128-bit floating-point vector of `[2 x double]` from two 128-bit vector parameters of `[2 x double]`, using the immediate-value parameter as a specifier.
- _mm_shuffle_ps ⚠ (x86 or x86-64) and sse - Shuffles packed single-precision (32-bit) floating-point elements in `a` and `b` using `MASK`.
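The shuffle control for `_mm_shuffle_ps` packs four 2-bit lane selectors into one byte; the `_MM_SHUFFLE` helper (listed near the end of this page) builds the same value in the conventional highest-to-lowest order. A sketch that reverses a vector, spelling the mask out as a literal:

```rust
use std::arch::x86_64::*;

fn main() {
    unsafe {
        let v = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
        // 0b00_01_10_11 == _MM_SHUFFLE(0, 1, 2, 3): output lanes 0..3
        // take input lanes 3, 2, 1, 0 — a full reverse.
        let rev = _mm_shuffle_ps::<0b00_01_10_11>(v, v);
        let mut out = [0.0f32; 4];
        _mm_storeu_ps(out.as_mut_ptr(), rev);
        assert_eq!(out, [4.0, 3.0, 2.0, 1.0]);
    }
}
```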
- _mm_shufflehi_epi16 ⚠ (x86 or x86-64) and sse2 - Shuffles 16-bit integers in the high 64 bits of `a` using the control in `IMM8`.
- _mm_shufflelo_epi16 ⚠ (x86 or x86-64) and sse2 - Shuffles 16-bit integers in the low 64 bits of `a` using the control in `IMM8`.
- _mm_sign_epi8 ⚠ (x86 or x86-64) and ssse3 - Negates packed 8-bit integers in `a` when the corresponding signed 8-bit integer in `b` is negative, and returns the result. Elements in the result are zeroed out when the corresponding element in `b` is zero.
- _mm_sign_epi16 ⚠ (x86 or x86-64) and ssse3 - Negates packed 16-bit integers in `a` when the corresponding signed 16-bit integer in `b` is negative, and returns the result. Elements in the result are zeroed out when the corresponding element in `b` is zero.
- _mm_sign_epi32 ⚠ (x86 or x86-64) and ssse3 - Negates packed 32-bit integers in `a` when the corresponding signed 32-bit integer in `b` is negative, and returns the result. Elements in the result are zeroed out when the corresponding element in `b` is zero.
- _mm_sll_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` left by `count` while shifting in zeros.
- _mm_sll_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` left by `count` while shifting in zeros.
- _mm_sll_epi64 ⚠ (x86 or x86-64) and sse2 - Shifts packed 64-bit integers in `a` left by `count` while shifting in zeros.
- _mm_slli_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` left by `IMM8` while shifting in zeros.
- _mm_slli_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` left by `IMM8` while shifting in zeros.
- _mm_slli_epi64 ⚠ (x86 or x86-64) and sse2 - Shifts packed 64-bit integers in `a` left by `IMM8` while shifting in zeros.
- _mm_slli_si128 ⚠ (x86 or x86-64) and sse2 - Shifts `a` left by `IMM8` bytes while shifting in zeros.
- _mm_sllv_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in `a` left by the amount specified by the corresponding element in `count` while shifting in zeros, and returns the result.
- _mm_sllv_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in `a` left by the amount specified by the corresponding element in `count` while shifting in zeros, and returns the result.
- _mm_sqrt_pd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the square root of each of the values in `a`.
- _mm_sqrt_ps ⚠ (x86 or x86-64) and sse - Returns the square root of packed single-precision (32-bit) floating-point elements in `a`.
- _mm_sqrt_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the square root of the lower element of `b`.
- _mm_sqrt_ss ⚠ (x86 or x86-64) and sse - Returns the square root of the first single-precision (32-bit) floating-point element in `a`; the other elements are unchanged.
- _mm_sra_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` right by `count` while shifting in sign bits.
- _mm_sra_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` right by `count` while shifting in sign bits.
- _mm_srai_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` right by `IMM8` while shifting in sign bits.
- _mm_srai_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` right by `IMM8` while shifting in sign bits.
- _mm_srav_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in `a` right by the amount specified by the corresponding element in `count` while shifting in sign bits.
- _mm_srl_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` right by `count` while shifting in zeros.
- _mm_srl_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` right by `count` while shifting in zeros.
- _mm_srl_epi64 ⚠ (x86 or x86-64) and sse2 - Shifts packed 64-bit integers in `a` right by `count` while shifting in zeros.
- _mm_srli_epi16 ⚠ (x86 or x86-64) and sse2 - Shifts packed 16-bit integers in `a` right by `IMM8` while shifting in zeros.
- _mm_srli_epi32 ⚠ (x86 or x86-64) and sse2 - Shifts packed 32-bit integers in `a` right by `IMM8` while shifting in zeros.
- _mm_srli_epi64 ⚠ (x86 or x86-64) and sse2 - Shifts packed 64-bit integers in `a` right by `IMM8` while shifting in zeros.
- _mm_srli_si128 ⚠ (x86 or x86-64) and sse2 - Shifts `a` right by `IMM8` bytes while shifting in zeros.
- _mm_srlv_epi32 ⚠ (x86 or x86-64) and avx2 - Shifts packed 32-bit integers in `a` right by the amount specified by the corresponding element in `count` while shifting in zeros, and returns the result.
- _mm_srlv_epi64 ⚠ (x86 or x86-64) and avx2 - Shifts packed 64-bit integers in `a` right by the amount specified by the corresponding element in `count` while shifting in zeros, and returns the result.
- _mm_store1_pd ⚠ (x86 or x86-64) and sse2 - Stores the lower double-precision (64-bit) floating-point element from `a` into 2 contiguous elements in memory. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_store1_ps ⚠ (x86 or x86-64) and sse - Stores the lowest 32-bit float of `a` repeated four times into aligned memory.
- _mm_store_pd ⚠ (x86 or x86-64) and sse2 - Stores 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from `a` into memory. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_store_pd1 ⚠ (x86 or x86-64) and sse2 - Stores the lower double-precision (64-bit) floating-point element from `a` into 2 contiguous elements in memory. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_store_ps ⚠ (x86 or x86-64) and sse - Stores four 32-bit floats into aligned memory.
- _mm_store_ps1 ⚠ (x86 or x86-64) and sse - Alias for `_mm_store1_ps`.
- _mm_store_sd ⚠ (x86 or x86-64) and sse2 - Stores the lower 64 bits of a 128-bit vector of `[2 x double]` to a memory location.
- _mm_store_si128 ⚠ (x86 or x86-64) and sse2 - Stores 128 bits of integer data from `a` into memory.
- _mm_store_ss ⚠ (x86 or x86-64) and sse - Stores the lowest 32-bit float of `a` into memory.
- _mm_storeh_pd ⚠ (x86 or x86-64) and sse2 - Stores the upper 64 bits of a 128-bit vector of `[2 x double]` to a memory location.
- _mm_storel_epi64 ⚠ (x86 or x86-64) and sse2 - Stores the lower 64-bit integer of `a` to a memory location.
- _mm_storel_pd ⚠ (x86 or x86-64) and sse2 - Stores the lower 64 bits of a 128-bit vector of `[2 x double]` to a memory location.
- _mm_storer_pd ⚠ (x86 or x86-64) and sse2 - Stores 2 double-precision (64-bit) floating-point elements from `a` into memory in reverse order. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated.
- _mm_storer_ps ⚠ (x86 or x86-64) and sse - Stores four 32-bit floats into aligned memory in reverse order.
- _mm_storeu_pd ⚠ (x86 or x86-64) and sse2 - Stores 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from `a` into memory. `mem_addr` does not need to be aligned on any particular boundary.
- _mm_storeu_ps ⚠ (x86 or x86-64) and sse - Stores four 32-bit floats into memory. There are no restrictions on memory alignment. For aligned memory `_mm_store_ps` may be faster.
- _mm_storeu_si16 ⚠ (x86 or x86-64) and sse2 - Stores a 16-bit integer from the first element of `a` into memory.
- _mm_storeu_si32 ⚠ (x86 or x86-64) and sse2 - Stores a 32-bit integer from the first element of `a` into memory.
- _mm_storeu_si64 ⚠ (x86 or x86-64) and sse2 - Stores a 64-bit integer from the first element of `a` into memory.
- _mm_storeu_si128 ⚠ (x86 or x86-64) and sse2 - Stores 128 bits of integer data from `a` into memory.
- _mm_stream_load_si128 ⚠ (x86 or x86-64) and sse4.1 - Loads 128 bits of integer data from memory into the destination. `mem_addr` must be aligned on a 16-byte boundary or a general-protection exception may be generated. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm_stream_pd ⚠ (x86 or x86-64) and sse2 - Stores a 128-bit floating-point vector of `[2 x double]` to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm_stream_ps ⚠ (x86 or x86-64) and sse - Stores `a` into the memory at `mem_addr` using a non-temporal memory hint.
- _mm_stream_sd ⚠ (x86 or x86-64) and sse4a - Non-temporal store of `a.0` into `p`.
- _mm_stream_si32 ⚠ (x86 or x86-64) and sse2 - Stores a 32-bit integer value in the specified memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
- _mm_stream_si128 ⚠ (x86 or x86-64) and sse2 - Stores a 128-bit integer vector to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).
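Non-temporal stores bypass the cache hierarchy, which is why they pair naturally with `_mm_sfence` to make the written data visible in a defined order. A sketch with `_mm_stream_ps` (the destination must be 16-byte aligned):

```rust
use std::arch::x86_64::*;

#[repr(align(16))]
struct Aligned([f32; 4]);

fn main() {
    let mut dst = Aligned([0.0; 4]);
    unsafe {
        let v = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
        // Write around the cache: the data is hinted as non-temporal.
        _mm_stream_ps(dst.0.as_mut_ptr(), v);
        // Order the streaming store before subsequent stores become visible.
        _mm_sfence();
    }
    assert_eq!(dst.0, [1.0, 2.0, 3.0, 4.0]);
}
```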
- _mm_stream_ss ⚠ (x86 or x86-64) and sse4a - Non-temporal store of `a.0` into `p`.
- _mm_sub_epi8 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 8-bit integers in `b` from packed 8-bit integers in `a`.
- _mm_sub_epi16 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 16-bit integers in `b` from packed 16-bit integers in `a`.
- _mm_sub_epi32 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 32-bit integers in `b` from packed 32-bit integers in `a`.
- _mm_sub_epi64 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 64-bit integers in `b` from packed 64-bit integers in `a`.
- _mm_sub_pd ⚠ (x86 or x86-64) and sse2 - Subtracts packed double-precision (64-bit) floating-point elements in `b` from `a`.
- _mm_sub_ps ⚠ (x86 or x86-64) and sse - Subtracts packed single-precision (32-bit) floating-point elements in `b` from `a`.
- _mm_sub_sd ⚠ (x86 or x86-64) and sse2 - Returns a new vector with the low element of `a` replaced by the result of subtracting the low element of `b` from the low element of `a`.
- _mm_sub_ss ⚠ (x86 or x86-64) and sse - Subtracts the first component of `b` from `a`; the other components are copied from `a`.
- _mm_subs_epi8 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 8-bit integers in `b` from packed 8-bit integers in `a` using saturation.
- _mm_subs_epi16 ⚠ (x86 or x86-64) and sse2 - Subtracts packed 16-bit integers in `b` from packed 16-bit integers in `a` using saturation.
- _mm_subs_epu8 ⚠ (x86 or x86-64) and sse2 - Subtracts packed unsigned 8-bit integers in `b` from packed unsigned 8-bit integers in `a` using saturation.
- _mm_subs_epu16 ⚠ (x86 or x86-64) and sse2 - Subtracts packed unsigned 16-bit integers in `b` from packed unsigned 16-bit integers in `a` using saturation.
- _mm_test_all_ones ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are all ones.
- _mm_test_all_zeros ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are all zeros.
- _mm_test_mix_ones_zeros ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.
- _mm_testc_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns the `CF` value.
- _mm_testc_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns the `CF` value.
- _mm_testc_si128 ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are all ones.
- _mm_testnzc_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns 1 if both the `ZF` and `CF` values are zero, otherwise returns 0.
- _mm_testnzc_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns 1 if both the `ZF` and `CF` values are zero, otherwise returns 0.
- _mm_testnzc_si128 ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.
- _mm_testz_pd ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns the `ZF` value.
- _mm_testz_ps ⚠ (x86 or x86-64) and avx - Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in `a` and `b`, producing an intermediate 128-bit value, and sets `ZF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `ZF` to 0. Computes the bitwise NOT of `a` and then ANDs with `b`, producing an intermediate value, and sets `CF` to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets `CF` to 0. Returns the `ZF` value.
- _mm_testz_si128 ⚠ (x86 or x86-64) and sse4.1 - Tests whether the specified bits in a 128-bit integer vector are all zeros.
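`_mm_testz_si128(a, b)` returns 1 exactly when `a & b` is all zeros, so `_mm_testz_si128(v, v)` is a cheap "is this vector entirely zero?" check. A sketch assuming SSE4.1:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("sse4.1") {
        unsafe {
            let zero = _mm_setzero_si128();
            let nonzero = _mm_set1_epi8(1);
            // ZF is set (result 1) iff a & b has no bits set.
            assert_eq!(_mm_testz_si128(zero, zero), 1);
            assert_eq!(_mm_testz_si128(nonzero, nonzero), 0);
        }
    }
}
```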
- _mm_tzcnt_32 ⚠ (x86 or x86-64) and bmi1 - Counts the number of trailing least significant zero bits.
- _mm_ucomieq_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for equality.
- _mm_ucomieq_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if they are equal, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_ucomige_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for greater-than-or-equal.
- _mm_ucomige_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if the value from `a` is greater than or equal to the one from `b`, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_ucomigt_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for greater-than.
- _mm_ucomigt_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if the value from `a` is greater than the one from `b`, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_ucomile_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for less-than-or-equal.
- _mm_ucomile_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if the value from `a` is less than or equal to the one from `b`, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_ucomilt_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for less-than.
- _mm_ucomilt_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if the value from `a` is less than the one from `b`, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_ucomineq_sd ⚠ (x86 or x86-64) and sse2 - Compares the lower element of `a` and `b` for not-equal.
- _mm_ucomineq_ss ⚠ (x86 or x86-64) and sse - Compares two 32-bit floats from the low-order bits of `a` and `b`. Returns `1` if they are not equal, or `0` otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
- _mm_undefined_pd ⚠ (x86 or x86-64) and sse2 - Returns a vector of type `__m128d` with indeterminate elements. Despite being "undefined", this is some valid value and not equivalent to `mem::MaybeUninit`. In practice, this is equivalent to `mem::zeroed`.
- _mm_undefined_ps ⚠ (x86 or x86-64) and sse - Returns a vector of type `__m128` with indeterminate elements. Despite being "undefined", this is some valid value and not equivalent to `mem::MaybeUninit`. In practice, this is equivalent to `mem::zeroed`.
- _mm_undefined_si128 ⚠ (x86 or x86-64) and sse2 - Returns a vector of type `__m128i` with indeterminate elements. Despite being "undefined", this is some valid value and not equivalent to `mem::MaybeUninit`. In practice, this is equivalent to `mem::zeroed`.
- _mm_unpackhi_epi8 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 8-bit integers from the high half of `a` and `b`.
- _mm_unpackhi_epi16 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 16-bit integers from the high half of `a` and `b`.
- _mm_unpackhi_epi32 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 32-bit integers from the high half of `a` and `b`.
- _mm_unpackhi_epi64 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 64-bit integers from the high half of `a` and `b`.
- _mm_unpackhi_pd ⚠ (x86 or x86-64) and sse2 - The resulting `__m128d` is composed of the high-order values of the two interleaved `__m128d` inputs, i.e. `[a.1, b.1]`.
- _mm_unpackhi_ps ⚠ (x86 or x86-64) and sse - Unpacks and interleaves single-precision (32-bit) floating-point elements from the higher half of `a` and `b`.
- _mm_unpacklo_epi8 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 8-bit integers from the low half of `a` and `b`.
- _mm_unpacklo_epi16 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 16-bit integers from the low half of `a` and `b`.
- _mm_unpacklo_epi32 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 32-bit integers from the low half of `a` and `b`.
- _mm_unpacklo_epi64 ⚠ (x86 or x86-64) and sse2 - Unpacks and interleaves 64-bit integers from the low half of `a` and `b`.
- _mm_unpacklo_pd ⚠ (x86 or x86-64) and sse2 - The resulting `__m128d` is composed of the low-order values of the two interleaved `__m128d` inputs, i.e. `[a.0, b.0]`.
- _mm_unpacklo_ps ⚠ (x86 or x86-64) and sse - Unpacks and interleaves single-precision (32-bit) floating-point elements from the lower half of `a` and `b`.
- _mm_xor_pd ⚠ (x86 or x86-64) and sse2 - Computes the bitwise XOR of `a` and `b`.
- _mm_xor_ps ⚠ (x86 or x86-64) and sse - Computes the bitwise XOR of packed single-precision (32-bit) floating-point elements.
- _mm_xor_si128 ⚠ (x86 or x86-64) and sse2 - Computes the bitwise XOR of 128 bits (representing integer data) in `a` and `b`.
- _mulx_u32 ⚠ (x86 or x86-64) and bmi2 - Unsigned multiply without affecting flags.
- _pdep_u32 ⚠ (x86 or x86-64) and bmi2 - Scatters contiguous low-order bits of `a` to the result at the positions specified by the `mask`.
- _pext_u32 ⚠ (x86 or x86-64) and bmi2 - Gathers the bits of `x` specified by the `mask` into the contiguous low-order bit positions of the result.
- _popcnt32 ⚠ (x86 or x86-64) and popcnt - Counts the bits that are set.
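The bit-count intrinsics map directly to single instructions. A sketch pairing `_popcnt32` with `_tzcnt_u32`, assuming the `popcnt` and `bmi1` features are available:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("popcnt") && is_x86_feature_detected!("bmi1") {
        unsafe {
            assert_eq!(_popcnt32(0b1011_0000), 3);  // three bits set
            assert_eq!(_tzcnt_u32(0b1011_0000), 4); // four trailing zeros
        }
    }
}
```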
- _rdrand16_step ⚠ (x86 or x86-64) and rdrand - Reads a hardware-generated 16-bit random value and stores the result in `val`. Returns 1 if a random value was generated, and 0 otherwise.
- _rdrand32_step ⚠ (x86 or x86-64) and rdrand - Reads a hardware-generated 32-bit random value and stores the result in `val`. Returns 1 if a random value was generated, and 0 otherwise.
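Because the `_rdrand*_step` intrinsics can transiently fail (returning 0), callers conventionally retry a bounded number of times. A sketch, assuming `rdrand` support:

```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("rdrand") {
        let mut val: u32 = 0;
        // Retry a few times: the instruction may transiently fail.
        for _ in 0..10 {
            if unsafe { _rdrand32_step(&mut val) } == 1 {
                println!("hardware random value: {val}");
                return;
            }
        }
        eprintln!("rdrand failed to produce a value");
    }
}
```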
- _rdseed16_
step ⚠(x86 or x86-64) and rdseed
- Read a 16-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.
- _rdseed32_
step ⚠(x86 or x86-64) and rdseed
- Read a 32-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.
- _rdtsc⚠
x86 or x86-64 - Reads the current value of the processor’s time-stamp counter.
- _subborrow_
u32 ⚠x86 or x86-64 - Adds unsigned 32-bit integers
a
andb
with unsigned 8-bit carry-inc_in
(carry or overflow flag), and store the unsigned 32-bit result inout
, and the carry-out is returned (carry or overflow flag). - _t1mskc_
- _t1mskc_u32⚠ (x86 or x86-64) and tbm - Clears all bits below the least significant zero of x and sets all other bits.
- _tzcnt_u16⚠ (x86 or x86-64) and bmi1 - Counts the number of trailing least significant zero bits.
- _tzcnt_u32⚠ (x86 or x86-64) and bmi1 - Counts the number of trailing least significant zero bits.
- _tzmsk_u32⚠ (x86 or x86-64) and tbm - Sets all bits below the least significant one of x and clears all other bits.
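A quick illustration of the trailing-zero count (assuming BMI1 support):

```rust
fn main() {
    if is_x86_feature_detected!("bmi1") {
        unsafe {
            use std::arch::x86_64::_tzcnt_u32;
            assert_eq!(_tzcnt_u32(0b1010_0000), 5); // five trailing zero bits
            assert_eq!(_tzcnt_u32(0), 32);          // defined even for a zero input
        }
    }
}
```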
- _xgetbv⚠ (x86 or x86-64) and xsave - Reads the contents of the extended control register XCR specified in xcr_no.
- _xrstor⚠ (x86 or x86-64) and xsave - Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.
- _xrstors⚠ (x86 or x86-64) and xsave,xsaves - Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.
- _xsave⚠ (x86 or x86-64) and xsave - Performs a full or partial save of the enabled processor states to memory at mem_addr.
- _xsavec⚠ (x86 or x86-64) and xsave,xsavec - Performs a full or partial save of the enabled processor states to memory at mem_addr.
- _xsaveopt⚠ (x86 or x86-64) and xsave,xsaveopt - Performs a full or partial save of the enabled processor states to memory at mem_addr.
- _xsaves⚠ (x86 or x86-64) and xsave,xsaves - Performs a full or partial save of the enabled processor states to memory at mem_addr.
- _xsetbv⚠ (x86 or x86-64) and xsave - Copies 64 bits from val to the extended control register (XCR) specified by a.
- _MM_SHUFFLE Experimental x86 or x86-64 - A utility function for creating masks to use with Intel shuffle and permute intrinsics.
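For instance, _MM_SHUFFLE(3, 2, 1, 0) packs its four 2-bit lane selectors into the immediate expected by _mm_shuffle_ps and friends. A minimal sketch (note that _MM_SHUFFLE is listed as experimental here, so depending on your toolchain a nightly feature gate may be required):

```rust
fn main() {
    if is_x86_feature_detected!("sse") {
        unsafe {
            use std::arch::x86_64::*;
            let a = _mm_setr_ps(0.0, 1.0, 2.0, 3.0);
            let b = _mm_setr_ps(4.0, 5.0, 6.0, 7.0);
            // The two low result lanes come from `a`, the two high lanes from `b`;
            // _MM_SHUFFLE(3, 2, 1, 0) therefore selects a[0], a[1], b[2], b[3].
            let r = _mm_shuffle_ps::<{ _MM_SHUFFLE(3, 2, 1, 0) }>(a, b);
            let mut out = [0.0f32; 4];
            _mm_storeu_ps(out.as_mut_ptr(), r);
            assert_eq!(out, [0.0, 1.0, 6.0, 7.0]);
        }
    }
}
```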
- _cvtmask8_u32⚠ Experimental (x86 or x86-64) and avx512dq - Convert 8-bit mask a to a 32-bit integer value and store the result in dst.
- _cvtmask16_u32⚠ Experimental (x86 or x86-64) and avx512f - Convert 16-bit mask a into an integer value, and store the result in dst.
- _cvtmask32_u32⚠ Experimental (x86 or x86-64) and avx512bw - Convert 32-bit mask a into an integer value, and store the result in dst.
- _cvtu32_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Convert 32-bit integer value a to an 8-bit mask and store the result in dst.
- _cvtu32_mask16⚠ Experimental (x86 or x86-64) and avx512f - Convert 32-bit integer value a to a 16-bit mask and store the result in dst.
- _cvtu32_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Convert integer value a into a 32-bit mask, and store the result in k.
- _kadd_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Add 8-bit masks a and b, and store the result in dst.
- _kadd_mask16⚠ Experimental (x86 or x86-64) and avx512dq - Add 16-bit masks a and b, and store the result in dst.
- _kadd_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Add 32-bit masks in a and b, and store the result in k.
- _kadd_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Add 64-bit masks in a and b, and store the result in k.
- _kand_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise AND of 8-bit masks a and b, and store the result in dst.
- _kand_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise AND of 16-bit masks a and b, and store the result in k.
- _kand_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 32-bit masks a and b, and store the result in k.
- _kand_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 64-bit masks a and b, and store the result in k.
- _kandn_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise AND NOT of 8-bit masks a and b, and store the result in dst.
- _kandn_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise NOT of 16-bit mask a and then AND with b, and store the result in k.
- _kandn_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 32-bit mask a and then AND with b, and store the result in k.
- _kandn_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 64-bit mask a and then AND with b, and store the result in k.
- _knot_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise NOT of 8-bit mask a, and store the result in dst.
- _knot_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise NOT of 16-bit mask a, and store the result in k.
- _knot_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 32-bit mask a, and store the result in k.
- _knot_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 64-bit mask a, and store the result in k.
- _kor_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise OR of 8-bit masks a and b, and store the result in dst.
- _kor_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise OR of 16-bit masks a and b, and store the result in k.
- _kor_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 32-bit masks a and b, and store the result in k.
- _kor_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 64-bit masks a and b, and store the result in k.
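The k-register operations work on plain mask integers (__mmask8, __mmask16, ...). A minimal sketch (these intrinsics are experimental, so a nightly toolchain is assumed; the gate name below matches recent nightlies but may differ or be unnecessary on yours):

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental mask intrinsics
fn main() {
    if is_x86_feature_detected!("avx512f") {
        unsafe {
            use std::arch::x86_64::*;
            // __mmask16 is an ordinary 16-bit integer used as a per-lane predicate.
            let a: __mmask16 = 0b0000_1111_0000_1111;
            let b: __mmask16 = 0b0000_0000_1111_1111;
            assert_eq!(_kand_mask16(a, b), 0b0000_0000_0000_1111);
            assert_eq!(_kor_mask16(a, b), 0b0000_1111_1111_1111);
            assert_eq!(_knot_mask16(a), 0b1111_0000_1111_0000);
            assert_eq!(_kandn_mask16(a, b), 0b0000_0000_1111_0000); // (!a) & b
        }
    }
}
```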
- _kortest_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise OR of 8-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst. If the result is all ones, store 1 in all_ones, otherwise store 0 in all_ones.
- _kortest_mask16_u8⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise OR of 16-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst. If the result is all ones, store 1 in all_ones, otherwise store 0 in all_ones.
- _kortest_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 32-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst. If the result is all ones, store 1 in all_ones, otherwise store 0 in all_ones.
- _kortest_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 64-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst. If the result is all ones, store 1 in all_ones, otherwise store 0 in all_ones.
- _kortestc_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise OR of 8-bit masks a and b. If the result is all ones, store 1 in dst, otherwise store 0 in dst.
- _kortestc_mask16_u8⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise OR of 16-bit masks a and b. If the result is all ones, store 1 in dst, otherwise store 0 in dst.
- _kortestc_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 32-bit masks a and b. If the result is all ones, store 1 in dst, otherwise store 0 in dst.
- _kortestc_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 64-bit masks a and b. If the result is all ones, store 1 in dst, otherwise store 0 in dst.
- _kortestz_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise OR of 8-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _kortestz_mask16_u8⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise OR of 16-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _kortestz_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 32-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _kortestz_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise OR of 64-bit masks a and b. If the result is all zeros, store 1 in dst, otherwise store 0 in dst.
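kortestz is the usual way to ask "did this comparison match anything at all?"; a minimal sketch under the same nightly assumptions as above:

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental mask intrinsics
fn main() {
    if is_x86_feature_detected!("avx512f") {
        unsafe {
            use std::arch::x86_64::*;
            let a = _mm512_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
            let b = _mm512_set1_epi32(100);
            // No lane of `a` equals 100, so the comparison mask is empty ...
            let k = _mm512_cmpeq_epi32_mask(a, b);
            // ... and the OR of the empty mask with itself is all zeros.
            assert_eq!(_kortestz_mask16_u8(k, k), 1);
        }
    }
}
```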
- _kshiftli_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Shift 8-bit mask a left by count bits while shifting in zeros, and store the result in dst.
- _kshiftli_mask16⚠ Experimental (x86 or x86-64) and avx512f - Shift 16-bit mask a left by count bits while shifting in zeros, and store the result in dst.
- _kshiftli_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Shift the bits of 32-bit mask a left by count while shifting in zeros, and store the least significant 32 bits of the result in k.
- _kshiftli_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Shift the bits of 64-bit mask a left by count while shifting in zeros, and store the least significant 64 bits of the result in k.
- _kshiftri_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Shift 8-bit mask a right by count bits while shifting in zeros, and store the result in dst.
- _kshiftri_mask16⚠ Experimental (x86 or x86-64) and avx512f - Shift 16-bit mask a right by count bits while shifting in zeros, and store the result in dst.
- _kshiftri_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Shift the bits of 32-bit mask a right by count while shifting in zeros, and store the least significant 32 bits of the result in k.
- _kshiftri_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Shift the bits of 64-bit mask a right by count while shifting in zeros, and store the least significant 64 bits of the result in k.
- _ktest_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise AND of 8-bit masks a and b, and if the result is all zeros, store 1 in dst, otherwise store 0 in dst. Compute the bitwise NOT of a and then AND with b, if the result is all zeros, store 1 in and_not, otherwise store 0 in and_not.
- _ktest_mask16_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise AND of 16-bit masks a and b, and if the result is all zeros, store 1 in dst, otherwise store 0 in dst. Compute the bitwise NOT of a and then AND with b, if the result is all zeros, store 1 in and_not, otherwise store 0 in and_not.
- _ktest_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 32-bit masks a and b, and if the result is all zeros, store 1 in dst, otherwise store 0 in dst. Compute the bitwise NOT of a and then AND with b, if the result is all zeros, store 1 in and_not, otherwise store 0 in and_not.
- _ktest_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 64-bit masks a and b, and if the result is all zeros, store 1 in dst, otherwise store 0 in dst. Compute the bitwise NOT of a and then AND with b, if the result is all zeros, store 1 in and_not, otherwise store 0 in and_not.
- _ktestc_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise NOT of 8-bit mask a and then AND with 8-bit mask b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestc_mask16_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise NOT of 16-bit mask a and then AND with 16-bit mask b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestc_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 32-bit mask a and then AND with 32-bit mask b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestc_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise NOT of 64-bit mask a and then AND with 64-bit mask b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestz_mask8_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise AND of 8-bit masks a and b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestz_mask16_u8⚠ Experimental (x86 or x86-64) and avx512dq - Compute the bitwise AND of 16-bit masks a and b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestz_mask32_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 32-bit masks a and b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _ktestz_mask64_u8⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise AND of 64-bit masks a and b, if the result is all zeros, store 1 in dst, otherwise store 0 in dst.
- _kxnor_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise XNOR of 8-bit masks a and b, and store the result in dst.
- _kxnor_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise XNOR of 16-bit masks a and b, and store the result in k.
- _kxnor_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise XNOR of 32-bit masks a and b, and store the result in k.
- _kxnor_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise XNOR of 64-bit masks a and b, and store the result in k.
- _kxor_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Bitwise XOR of 8-bit masks a and b, and store the result in dst.
- _kxor_mask16⚠ Experimental (x86 or x86-64) and avx512f - Compute the bitwise XOR of 16-bit masks a and b, and store the result in k.
- _kxor_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise XOR of 32-bit masks a and b, and store the result in k.
- _kxor_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Compute the bitwise XOR of 64-bit masks a and b, and store the result in k.
- _load_mask8⚠ Experimental (x86 or x86-64) and avx512dq - Load 8-bit mask from memory.
- _load_mask16⚠ Experimental (x86 or x86-64) and avx512f - Load 16-bit mask from memory.
- _load_mask32⚠ Experimental (x86 or x86-64) and avx512bw - Load 32-bit mask from memory into k.
- _load_mask64⚠ Experimental (x86 or x86-64) and avx512bw - Load 64-bit mask from memory into k.
- _mm256_abs_epi64⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst.
- _mm256_abs_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Finds the absolute value of each packed half-precision (16-bit) floating-point element in v2, storing the result in dst.
- _mm256_add_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Add packed half-precision (16-bit) floating-point elements in a and b, and store the results in dst.
- _mm256_aesdec_epi128⚠ Experimental (x86 or x86-64) and vaes - Performs one round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
- _mm256_aesdeclast_epi128⚠ Experimental (x86 or x86-64) and vaes - Performs the last round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
- _mm256_aesenc_epi128⚠ Experimental (x86 or x86-64) and vaes - Performs one round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
- _mm256_aesenclast_epi128⚠ Experimental (x86 or x86-64) and vaes - Performs the last round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
- _mm256_alignr_epi32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 32-bit elements, and store the low 32 bytes (8 elements) in dst.
- _mm256_alignr_epi64⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 64-bit elements, and store the low 32 bytes (4 elements) in dst.
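In other words, alignr treats the concatenation of a and b as one long vector and extracts a window of it; a minimal sketch under the same nightly assumptions as the other AVX-512 examples:

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental intrinsics
fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        unsafe {
            use std::arch::x86_64::*;
            let a = _mm256_setr_epi32(8, 9, 10, 11, 12, 13, 14, 15);
            let b = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
            // The concatenation (a above b) holds 0..=15; shifting right by
            // three 32-bit elements keeps elements 3..=10.
            let r = _mm256_alignr_epi32::<3>(a, b);
            let mut out = [0i32; 8];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, r);
            assert_eq!(out, [3, 4, 5, 6, 7, 8, 9, 10]);
        }
    }
}
```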
- _mm256_bcstnebf16_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert scalar BF16 (16-bit) floating point element stored at memory locations starting at location a to single precision (32-bit) floating-point, broadcast it to packed single precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_bcstnesh_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert scalar half-precision (16-bit) floating-point element stored at memory locations starting at location a to a single-precision (32-bit) floating-point, broadcast it to packed single-precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_bitshuffle_epi64_mask⚠ Experimental (x86 or x86-64) and avx512bitalg,avx512vl - Considers the input b as packed 64-bit integers and c as packed 8-bit integers. Then groups 8 8-bit values from c as indices into the bits of the corresponding 64-bit integer. It then selects these bits and packs them into the output.
- _mm256_broadcast_f32x2⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Broadcasts the lower 2 packed single-precision (32-bit) floating-point elements from a to all elements of dst.
- _mm256_broadcast_f32x4⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst.
- _mm256_broadcast_f64x2⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Broadcasts the 2 packed double-precision (64-bit) floating-point elements from a to all elements of dst.
- _mm256_broadcast_i32x2⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Broadcasts the lower 2 packed 32-bit integers from a to all elements of dst.
- _mm256_broadcast_i32x4⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Broadcast the 4 packed 32-bit integers from a to all elements of dst.
- _mm256_broadcast_i64x2⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Broadcasts the 2 packed 64-bit integers from a to all elements of dst.
- _mm256_broadcastmb_epi64⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl - Broadcast the low 8 bits from input mask k to all 64-bit elements of dst.
- _mm256_broadcastmw_epi32⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl - Broadcast the low 16 bits from input mask k to all 32-bit elements of dst.
- _mm256_castpd_ph⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256d to type __m256h. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castph128_ph256⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m128h to type __m256h. The upper 8 elements of the result are undefined. In practice, the upper elements are zeroed. This intrinsic can generate the vzeroupper instruction, but most of the time it does not generate any instructions.
- _mm256_castph256_ph128⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256h to type __m128h. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castph_pd⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256h to type __m256d. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castph_ps⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256h to type __m256. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castph_si256⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256h to type __m256i. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castps_ph⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256 to type __m256h. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_castsi256_ph⚠ Experimental (x86 or x86-64) and avx512fp16 - Cast vector of type __m256i to type __m256h. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
- _mm256_clmulepi64_epi128⚠ Experimental (x86 or x86-64) and vpclmulqdq - Performs a carry-less multiplication of two 64-bit polynomials over the finite field GF(2), in each of the 2 128-bit lanes.
- _mm256_cmp_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_pd_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_ph_mask⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Compare packed half-precision (16-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
- _mm256_cmp_ps_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
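Unlike the AVX2 compares, which produce a vector of lane-wide 0/-1 values, these produce one bit per lane in a mask register. A minimal sketch using the _MM_CMPINT_LT predicate constant (same nightly assumptions as above):

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental intrinsics
fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        unsafe {
            use std::arch::x86_64::*;
            let a = _mm256_setr_epi32(0, 5, 2, 7, 4, 1, 6, 3);
            let b = _mm256_set1_epi32(4);
            // Bit i of the result is set exactly where a[i] < b[i].
            let k: __mmask8 = _mm256_cmp_epi32_mask::<_MM_CMPINT_LT>(a, b);
            assert_eq!(k, 0b1010_0101); // lanes 0, 2, 5 and 7 hold values below 4
        }
    }
}
```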
- _mm256_cmpeq_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed 32-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed 64-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpeq_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for equality, and store the results in mask vector k.
- _mm256_cmpge_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpge_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
- _mm256_cmpgt_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 32-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmpgt_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in mask vector k.
- _mm256_cmple_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmple_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
- _mm256_cmplt_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 32-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmplt_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in mask vector k.
- _mm256_cmpneq_epi8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 8-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epi16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed signed 16-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epi32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed 32-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epi64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed signed 64-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epu8_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 8-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epu16_mask⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compare packed unsigned 16-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epu32_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 32-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmpneq_epu64_mask⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Compare packed unsigned 64-bit integers in a and b for not-equal, and store the results in mask vector k.
- _mm256_cmul_pch⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, and store the results in dst. Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1], or the complex conjugate conjugate = vec.fp16[0] - i * vec.fp16[1].
- _mm256_conflict_epi32⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl - Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
- _mm256_conflict_epi64⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl - Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
- _mm256_conj_pch⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Compute the complex conjugates of complex numbers in a, and store the results in dst. Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1], or the complex conjugate conjugate = vec.fp16[0] - i * vec.fp16[1].
- _mm256_cvtepi16_epi8⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi16_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed signed 16-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepi32_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi32_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi32_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed signed 32-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepi64_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi64_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi64_epi32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst.
- _mm256_cvtepi64_pd⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed signed 64-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepi64_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed signed 64-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst. The upper 64 bits of dst are zeroed out.
- _mm256_cvtepi64_ps⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed signed 64-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepu16_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed unsigned 16-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepu32_pd⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepu32_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed unsigned 32-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepu64_pd⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed unsigned 64-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
- _mm256_cvtepu64_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed unsigned 64-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst. The upper 64 bits of dst are zeroed out.
- _mm256_cvtepu64_ps⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed unsigned 64-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
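Note the difference between these truncating conversions and the saturating cvtsepi/cvtusepi variants listed below: truncation simply keeps the low bits. A minimal sketch (same nightly assumptions as the other AVX-512 examples):

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental intrinsics
fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        unsafe {
            use std::arch::x86_64::*;
            let a = _mm256_setr_epi32(1, 2, 3, 70_000, -1, -2, -3, -70_000);
            // Truncating narrow: each 32-bit lane keeps only its low 16 bits,
            // so 70_000 (0x11170) wraps to 0x1170 = 4464 instead of saturating.
            let r = _mm256_cvtepi32_epi16(a);
            let mut out = [0i16; 8];
            _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
            assert_eq!(out, [1, 2, 3, 4464, -1, -2, -3, -4464]);
        }
    }
}
```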
- _mm256_cvtne2ps_pbh⚠ Experimental (x86 or x86-64) and avx512bf16,avx512vl - Convert packed single-precision (32-bit) floating-point elements in two 256-bit vectors a and b to packed BF16 (16-bit) floating-point elements, and store the results in a 256-bit wide vector. Intel’s documentation
- _mm256_cvtneebf16_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert packed BF16 (16-bit) floating-point even-indexed elements stored at memory locations starting at location a to single precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtneeph_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert packed half-precision (16-bit) floating-point even-indexed elements stored at memory locations starting at location a to single precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtneobf16_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert packed BF16 (16-bit) floating-point odd-indexed elements stored at memory locations starting at location a to single precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtneoph_ps⚠ Experimental (x86 or x86-64) and avxneconvert - Convert packed half-precision (16-bit) floating-point odd-indexed elements stored at memory locations starting at location a to single precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtneps_avx_pbh⚠ Experimental (x86 or x86-64) and avxneconvert - Convert packed single precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst.
- _mm256_cvtneps_pbh⚠ Experimental (x86 or x86-64) and avx512bf16,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst. Intel’s documentation
- _mm256_cvtpbh_ps⚠ Experimental (x86 or x86-64) and avx512bf16,avx512vl - Converts packed BF16 (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtpd_epi64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed signed 64-bit integers, and store the results in dst.
- _mm256_cvtpd_epu32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
- _mm256_cvtpd_epu64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 64-bit integers, and store the results in dst.
- _mm256_cvtpd_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst. The upper 64 bits of dst are zeroed out.
- _mm256_cvtph_epi16⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 16-bit integers, and store the results in dst.
- _mm256_cvtph_epi32⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.
- _mm256_cvtph_epi64⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit integers, and store the results in dst.
- _mm256_cvtph_epu16⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed unsigned 16-bit integers, and store the results in dst.
- _mm256_cvtph_epu32⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit unsigned integers, and store the results in dst.
- _mm256_cvtph_epu64⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit unsigned integers, and store the results in dst.
- _mm256_cvtph_pd⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
- _mm256_cvtps_epi64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed signed 64-bit integers, and store the results in dst.
- _mm256_cvtps_epu32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
- _mm256_cvtps_epu64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 64-bit integers, and store the results in dst.
- _mm256_cvtsepi16_epi8⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsepi32_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsepi32_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsepi64_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsepi64_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsepi64_epi32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst.
- _mm256_cvtsh_h⚠ Experimental (x86 or x86-64) and avx512fp16 - Copy the lower half-precision (16-bit) floating-point element from a to dst.
- _mm256_cvttpd_epi64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed signed 64-bit integers with truncation, and store the result in dst.
- _mm256_cvttpd_epu32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
- _mm256_cvttpd_epu64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 64-bit integers with truncation, and store the result in dst.
- _mm256_cvttph_epi16⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 16-bit integers with truncation, and store the results in dst.
- _mm256_cvttph_epi32⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.
- _mm256_cvttph_epi64⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit integers with truncation, and store the results in dst.
- _mm256_cvttph_epu16⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed unsigned 16-bit integers with truncation, and store the results in dst.
- _mm256_cvttph_epu32⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit unsigned integers with truncation, and store the results in dst.
- _mm256_cvttph_epu64⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit unsigned integers with truncation, and store the results in dst.
- _mm256_cvttps_epi64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed signed 64-bit integers with truncation, and store the result in dst.
- _mm256_cvttps_epu32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
- _mm256_cvttps_epu64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 64-bit integers with truncation, and store the result in dst.
- _mm256_cvtusepi16_epi8⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtusepi32_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtusepi32_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtusepi64_epi8⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtusepi64_epi16⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtusepi64_epi32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst.
- _mm256_cvtxph_ps⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
- _mm256_cvtxps_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
- _mm256_dbsad_epu8⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl - Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst. Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
- _mm256_div_ph⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl - Divide packed half-precision (16-bit) floating-point elements in a by b, and store the results in dst.
- _mm256_dpbf16_ps⚠ Experimental (x86 or x86-64) and avx512bf16,avx512vl - Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst. Intel’s documentation
- _mm256_dpbssd_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of signed 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpbssds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of signed 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpbsud_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of signed 8-bit integers in a with corresponding unsigned 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpbsuds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of signed 8-bit integers in a with corresponding unsigned 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpbusd_avx_epi32⚠ Experimental (x86 or x86-64) and avxvnni - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpbusd_epi32⚠ Experimental (x86 or x86-64) and avx512vnni,avx512vl - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpbusds_avx_epi32⚠ Experimental (x86 or x86-64) and avxvnni - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpbusds_epi32⚠ Experimental (x86 or x86-64) and avx512vnni,avx512vl - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpbuud_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding unsigned 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpbuuds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint8 - Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding unsigned 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpwssd_avx_epi32⚠ Experimental (x86 or x86-64) and avxvnni - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpwssd_epi32⚠ Experimental (x86 or x86-64) and avx512vnni,avx512vl - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpwssds_avx_epi32⚠ Experimental (x86 or x86-64) and avxvnni - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpwssds_epi32⚠ Experimental (x86 or x86-64) and avx512vnni,avx512vl - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpwsud_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding unsigned 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpwsuds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding unsigned 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpwusd_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of unsigned 16-bit integers in a with corresponding signed 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpwusds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of unsigned 16-bit integers in a with corresponding signed 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
- _mm256_dpwuud_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of unsigned 16-bit integers in a with corresponding unsigned 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
- _mm256_dpwuuds_epi32⚠ Experimental (x86 or x86-64) and avxvnniint16 - Multiply groups of 2 adjacent pairs of unsigned 16-bit integers in a with corresponding unsigned 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src with signed saturation, and store the packed 32-bit results in dst.
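All of these dot-product intrinsics follow the same shape: widen, multiply pairwise, sum a small group, accumulate into src. A minimal sketch using the AVX-VNNI byte variant (experimental; the feature gate and the "avxvnni" detection key are assumptions that match recent toolchains):

```rust
#![feature(stdarch_x86_avx512)] // assumed gate for the experimental intrinsics
fn main() {
    if is_x86_feature_detected!("avxvnni") {
        unsafe {
            use std::arch::x86_64::*;
            let src = _mm256_setzero_si256();
            // Every unsigned byte of `a` is 2 and every signed byte of `b` is 3,
            // so each 32-bit lane accumulates 4 * (2 * 3) = 24.
            let a = _mm256_set1_epi8(2);
            let b = _mm256_set1_epi8(3);
            let dot = _mm256_dpbusd_avx_epi32(src, a, b);
            let mut out = [0i32; 8];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, dot);
            assert_eq!(out, [24; 8]);
        }
    }
}
```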
- _mm256_extractf32x4_ps⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the result in dst.
- _mm256_extractf64x2_pd⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Extracts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a, selected with IMM8, and stores the result in dst.
- _mm256_extracti32x4_epi32⚠ Experimental (x86 or x86-64) and avx512f,avx512vl - Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with IMM1, and store the result in dst.
- _mm256_extracti64x2_epi64⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl - Extracts 128 bits (composed of 2 packed 64-bit integers) from a, selected with IMM8, and stores the result in dst.
- _mm256_
fcmadd_ ⚠pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, accumulate
to the corresponding complex numbers in c, and store the results in dst. Each complex number is composed
of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number
complex = vec.fp16[0] + i * vec.fp16[1]
, or the complex conjugateconjugate = vec.fp16[0] - i * vec.fp16[1]
. - _mm256_
fcmul_ ⚠pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, and
store the results in dst. Each complex number is composed of two adjacent half-precision (16-bit)
floating-point elements, which defines the complex number
complex = vec.fp16[0] + i * vec.fp16[1]
, or the complex conjugateconjugate = vec.fp16[0] - i * vec.fp16[1]
- _mm256_
fixupimm_ ⚠pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
- _mm256_
fixupimm_ ⚠ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
- _mm256_
fmadd_ ⚠pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, accumulate to the corresponding complex numbers in c, and store the results in dst. Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1].
- _mm256_
fmadd_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.
- _mm256_
fmaddsub_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
- _mm256_
fmsub_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.
- _mm256_
fmsubadd_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c to/from the intermediate result, and store the results in dst.
- _mm256_
fmul_ ⚠pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, and store the results in dst. Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1].
- _mm256_
fnmadd_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract the intermediate result from packed elements in c, and store the results in dst.
- _mm256_
fnmsub_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.
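The fmadd/fmsub/fnmadd/fnmsub entries above differ only in where the signs land; the per-element formulas, sketched in scalar f32:
```rust
fn fmadd(a: f32, b: f32, c: f32) -> f32 { a.mul_add(b, c) }      //  (a*b) + c
fn fmsub(a: f32, b: f32, c: f32) -> f32 { a.mul_add(b, -c) }     //  (a*b) - c
fn fnmadd(a: f32, b: f32, c: f32) -> f32 { (-a).mul_add(b, c) }  // c - (a*b)
fn fnmsub(a: f32, b: f32, c: f32) -> f32 { (-a).mul_add(b, -c) } // -(a*b) - c

fn main() {
    assert_eq!(fmadd(2.0, 3.0, 4.0), 10.0);
    assert_eq!(fmsub(2.0, 3.0, 4.0), 2.0);
    assert_eq!(fnmadd(2.0, 3.0, 4.0), -2.0);
    assert_eq!(fnmsub(2.0, 3.0, 4.0), -10.0);
}
```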
- _mm256_
fpclass_ ⚠pd_ mask Experimental (x86 or x86-64) and avx512dq,avx512vl
- Test packed double-precision (64-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k. imm can be a combination of:
- _mm256_
fpclass_ ⚠ph_ mask Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Test packed half-precision (16-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k. imm can be a combination of:
- _mm256_
fpclass_ ⚠ps_ mask Experimental (x86 or x86-64) and avx512dq,avx512vl
- Test packed single-precision (32-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k. imm can be a combination of:
- _mm256_
getexp_ ⚠pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
- _mm256_
getexp_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert the exponent of each packed half-precision (16-bit) floating-point element in a to a half-precision (16-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
- _mm256_
getexp_ ⚠ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
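A scalar model of the getexp behavior for normal, nonzero inputs (the special cases for zero, infinity and NaN are not modeled):
```rust
// getexp returns floor(log2(|x|)) as a float, i.e. the unbiased exponent.
fn getexp(x: f32) -> f32 {
    x.abs().log2().floor()
}

fn main() {
    assert_eq!(getexp(1.0), 0.0);
    assert_eq!(getexp(8.0), 3.0);
    assert_eq!(getexp(0.375), -2.0); // 0.375 = 1.5 * 2^-2
}
```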
- _mm256_
getmant_ ⚠pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
- _mm256_
getmant_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Normalize the mantissas of packed half-precision (16-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by norm and the sign depends on sign and the source sign.
- _mm256_
getmant_ ⚠ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
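A scalar sketch of the default case, _MM_MANT_NORM_1_2 with _MM_MANT_SIGN_src, for normal inputs (the helper is illustrative only):
```rust
// Normalize |x| into [1.0, 2.0) by forcing the exponent to 2^0, keeping the sign.
fn getmant_1_2_src(x: f64) -> f64 {
    let bits = x.to_bits();
    let sign = bits & 0x8000_0000_0000_0000;
    let mantissa = bits & 0x000F_FFFF_FFFF_FFFF;
    f64::from_bits(sign | (1023u64 << 52) | mantissa) // biased exponent 1023 = 2^0
}

fn main() {
    assert_eq!(getmant_1_2_src(12.0), 1.5);   // 12 = 1.5 * 2^3
    assert_eq!(getmant_1_2_src(-0.75), -1.5); // -0.75 = -1.5 * 2^-1
}
```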
- _mm256_
gf2p8affine_ ⚠epi64_ epi8 Experimental (x86 or x86-64) and gfni,avx
- Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
- _mm256_
gf2p8affineinv_ ⚠epi64_ epi8 Experimental (x86 or x86-64) and gfni,avx
- Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1. The inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
- _mm256_
gf2p8mul_ ⚠epi8 Experimental (x86 or x86-64) and gfni,avx
- Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
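A scalar reference for this byte multiplication, reducing by x^8 + x^4 + x^3 + x + 1 (0x11B, the AES polynomial):
```rust
// Shift-and-add multiplication in GF(2^8); addition in GF(2) is XOR.
fn gf2p8_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 {
            p ^= a;
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1B; // reduce by the low byte of 0x11B
        }
        b >>= 1;
    }
    p
}

fn main() {
    assert_eq!(gf2p8_mul(0x53, 0xCA), 0x01); // 0x53 and 0xCA are inverses in this field
    assert_eq!(gf2p8_mul(2, 0x80), 0x1B);
}
```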
- _mm256_
i32scatter_ ⚠epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 8 32-bit integer elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale.
- _mm256_
i32scatter_ ⚠epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Scatter 64-bit integers from a into memory using 32-bit indices. 64-bit elements are stored at addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
- _mm256_
i32scatter_ ⚠pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 double-precision (64-bit) floating-point elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale.
- _mm256_
i32scatter_ ⚠ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 8 single-precision (32-bit) floating-point elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale.
- _mm256_
i64scatter_ ⚠epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 32-bit integer elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale.
- _mm256_
i64scatter_ ⚠epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 64-bit integer elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale.
- _mm256_
i64scatter_ ⚠pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 double-precision (64-bit) floating-point elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale.
- _mm256_
i64scatter_ ⚠ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 single-precision (32-bit) floating-point elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale.
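All the scatter entries compute destinations the same way: element i goes to base_addr + vindex[i] * scale. A scalar sketch over a byte slice (names follow the descriptions above):
```rust
fn scatter_epi32(mem: &mut [u8], a: &[i32; 4], vindex: &[i64; 4], scale: usize) {
    for i in 0..4 {
        let off = vindex[i] as usize * scale; // byte offset from base_addr
        mem[off..off + 4].copy_from_slice(&a[i].to_le_bytes());
    }
}

fn main() {
    let mut mem = [0u8; 64];
    scatter_epi32(&mut mem, &[10, 20, 30, 40], &[0, 2, 4, 6], 8);
    assert_eq!(&mem[16..20], &20i32.to_le_bytes()); // index 2 * scale 8 = offset 16
}
```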
- _mm256_
insertf32x4 ⚠Experimental (x86 or x86-64) and avx512f,avx512vl
- Copy a to dst, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into dst at the location specified by imm8.
- _mm256_
insertf64x2 ⚠Experimental (x86 or x86-64) and avx512dq,avx512vl
- Copy a to dst, then insert 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from b into dst at the location specified by IMM8.
- _mm256_
inserti32x4 ⚠Experimental (x86 or x86-64) and avx512f,avx512vl
- Copy a to dst, then insert 128 bits (composed of 4 packed 32-bit integers) from b into dst at the location specified by imm8.
- _mm256_
inserti64x2 ⚠Experimental (x86 or x86-64) and avx512dq,avx512vl
- Copy a to dst, then insert 128 bits (composed of 2 packed 64-bit integers) from b into dst at the location specified by IMM8.
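The stable AVX2 analog shows the insert pattern; a runnable sketch on x86-64:
```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("avx2") {
        unsafe {
            let a = _mm256_setzero_si256();
            let b = _mm_setr_epi32(1, 2, 3, 4);
            // Index 1 writes b into the upper 128 bits; the rest is copied from a.
            let r = _mm256_inserti128_si256::<1>(a, b);
            let mut out = [0i32; 8];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, r);
            assert_eq!(out, [0, 0, 0, 0, 1, 2, 3, 4]);
        }
    }
}
```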
- _mm256_
load_ ⚠epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load 256-bits (composed of 8 packed 32-bit integers) from memory into dst. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_
load_ ⚠epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load 256-bits (composed of 4 packed 64-bit integers) from memory into dst. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_
load_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Load 256-bits (composed of 16 packed half-precision (16-bit) floating-point elements) from memory into a new vector. The address must be aligned to 32 bytes or a general-protection exception may be generated.
- _mm256_
loadu_ ⚠epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Load 256-bits (composed of 32 packed 8-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
- _mm256_
loadu_ ⚠epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Load 256-bits (composed of 16 packed 16-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
- _mm256_
loadu_ ⚠epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load 256-bits (composed of 8 packed 32-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
- _mm256_
loadu_ ⚠epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load 256-bits (composed of 4 packed 64-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
- _mm256_
loadu_ ⚠ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Load 256-bits (composed of 16 packed half-precision (16-bit) floating-point elements) from memory into a new vector. The address does not need to be aligned to any particular boundary.
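The aligned/unaligned split mirrors the stable AVX loads; e.g. _mm256_loadu_si256 tolerates any alignment, while the load_ variants above require it:
```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("avx") {
        unsafe {
            let data: [i64; 5] = [0, 1, 2, 3, 4];
            // Unaligned load of elements 1..5; the pointer has no 32-byte guarantee.
            let v = _mm256_loadu_si256(data[1..].as_ptr() as *const __m256i);
            let mut out = [0i64; 4];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, v);
            assert_eq!(out, [1, 2, 3, 4]);
        }
    }
}
```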
- _mm256_
lzcnt_ ⚠epi32 Experimental (x86 or x86-64) and avx512cd,avx512vl
- Count the number of leading zero bits in each packed 32-bit integer in a, and store the results in dst.
- _mm256_
lzcnt_ ⚠epi64 Experimental (x86 or x86-64) and avx512cd,avx512vl
- Count the number of leading zero bits in each packed 64-bit integer in a, and store the results in dst.
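The per-element behavior matches Rust's leading_zeros on the same width:
```rust
fn main() {
    // Scalar model of lzcnt_epi32, one lane per array element.
    let a: [u32; 4] = [0, 1, 0x0000_FFFF, u32::MAX];
    let counts: Vec<u32> = a.iter().map(|x| x.leading_zeros()).collect();
    assert_eq!(counts, vec![32, 31, 16, 0]);
}
```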
- _mm256_
madd52hi_ ⚠avx_ epu64 Experimental (x86 or x86-64) and avxifma
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
- _mm256_
madd52hi_ ⚠epu64 Experimental (x86 or x86-64) and avx512ifma,avx512vl
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
- _mm256_
madd52lo_ ⚠avx_ epu64 Experimental (x86 or x86-64) and avxifma
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
- _mm256_
madd52lo_ ⚠epu64 Experimental (x86 or x86-64) and avx512ifma,avx512vl
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
- _mm256_
mask2_ ⚠permutex2var_ epi8 Experimental (x86 or x86-64) and avx512vbmi,avx512vl
- Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask2_ ⚠permutex2var_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask2_ ⚠permutex2var_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask2_ ⚠permutex2var_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask2_ ⚠permutex2var_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask2_ ⚠permutex2var_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fcmadd_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, accumulate to the corresponding complex numbers in c, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1], or the complex conjugate conjugate = vec.fp16[0] - i * vec.fp16[1].
- _mm256_
mask3_ ⚠fmadd_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, accumulate to the corresponding complex numbers in c, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1].
- _mm256_
mask3_ ⚠fmadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmaddsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmaddsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmaddsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsubadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsubadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c to/from the intermediate result, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fmsubadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract the intermediate result from packed elements in c, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (the element is copied from c when the corresponding mask bit is not set).
- _mm256_
mask3_ ⚠fnmsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠abs_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compute the absolute value of packed signed 8-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠abs_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compute the absolute value of packed signed 16-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠abs_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠abs_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
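Every mask_ entry in this list follows the same writemask rule; a scalar sketch (not the intrinsic itself) with 32-bit abs standing in for the operation:
```rust
// Lane i is op(a[i]) if bit i of k is set, otherwise a copy of src[i].
fn mask_abs_epi32(src: [i32; 8], k: u8, a: [i32; 8]) -> [i32; 8] {
    let mut dst = src;
    for i in 0..8 {
        if k & (1 << i) != 0 {
            dst[i] = a[i].wrapping_abs();
        }
    }
    dst
}

fn main() {
    let src = [9; 8];
    let a = [-1, -2, -3, -4, -5, -6, -7, -8];
    assert_eq!(mask_abs_epi32(src, 0b0000_0101, a), [1, 9, 3, 9, 9, 9, 9, 9]);
}
```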
- _mm256_
mask_ ⚠add_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed 8-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed 16-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Add packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Add packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Add packed half-precision (16-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠add_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠adds_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed signed 8-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠adds_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed signed 16-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠adds_ epu8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed unsigned 8-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠adds_ epu16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Add packed unsigned 16-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠alignr_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Concatenate pairs of 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by imm8 bytes, and store the low 16 bytes in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠alignr_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 32-bit elements, and store the low 32 bytes (8 elements) in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠alignr_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 64-bit elements, and store the low 32 bytes (4 elements) in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠and_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Performs element-by-element bitwise AND between packed 32-bit integer elements of a and b, storing the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠and_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠and_ pd Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise AND of packed double-precision (64-bit) floating point numbers in a and b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠and_ ps Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise AND of packed single-precision (32-bit) floating point numbers in a and b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠andnot_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the bitwise NOT of packed 32-bit integers in a and then AND with b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠andnot_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the bitwise NOT of packed 64-bit integers in a and then AND with b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠andnot_ pd Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise NOT of packed double-precision (64-bit) floating point numbers in a and then bitwise AND with b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠andnot_ ps Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise NOT of packed single-precision (32-bit) floating point numbers in a and then bitwise AND with b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠avg_ epu8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Average packed unsigned 8-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠avg_ epu16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Average packed unsigned 16-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠bitshuffle_ epi64_ mask Experimental (x86 or x86-64) and avx512bitalg,avx512vl
- Considers the input b as packed 64-bit integers and c as packed 8-bit integers. Then groups 8 8-bit values from c as indices into the bits of the corresponding 64-bit integer. It then selects these bits and packs them into the output.
- _mm256_
mask_ ⚠blend_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Blend packed 8-bit integers from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Blend packed 16-bit integers from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Blend packed 32-bit integers from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Blend packed 64-bit integers from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Blend packed double-precision (64-bit) floating-point elements from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Blend packed half-precision (16-bit) floating-point elements from a and b using control mask k, and store the results in dst.
- _mm256_
mask_ ⚠blend_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Blend packed single-precision (32-bit) floating-point elements from a and b using control mask k, and store the results in dst.
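Blend is the simplest masked form: k alone chooses between the two sources. A scalar sketch:
```rust
// Bit i of k picks b[i]; a cleared bit picks a[i].
fn mask_blend_epi32(k: u8, a: [i32; 8], b: [i32; 8]) -> [i32; 8] {
    core::array::from_fn(|i| if k & (1 << i) != 0 { b[i] } else { a[i] })
}

fn main() {
    let a = [0; 8];
    let b = [1, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(mask_blend_epi32(0b1111_0000, a, b), [0, 0, 0, 0, 5, 6, 7, 8]);
}
```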
- _mm256_
mask_ ⚠broadcast_ f32x2 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Broadcasts the lower 2 packed single-precision (32-bit) floating-point elements from a to all elements of dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠broadcast_ f32x4 Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcast_ f64x2 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Broadcasts the 2 packed double-precision (64-bit) floating-point elements from a to all elements of dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠broadcast_ i32x2 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Broadcasts the lower 2 packed 32-bit integers from a to all elements of dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠broadcast_ i32x4 Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the 4 packed 32-bit integers from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcast_ i64x2 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Broadcasts the 2 packed 64-bit integers from a to all elements of dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠broadcastb_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Broadcast the low packed 8-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcastd_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the low packed 32-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcastq_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the low packed 64-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcastsd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the low double-precision (64-bit) floating-point element from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcastss_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast the low single-precision (32-bit) floating-point element from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠broadcastw_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Broadcast the low packed 16-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
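For the unmasked core of these broadcasts, the stable AVX2 form is available; a runnable sketch on x86-64:
```rust
use std::arch::x86_64::*;

fn main() {
    if is_x86_feature_detected!("avx2") {
        unsafe {
            // Copy the low 32-bit lane of a to all eight lanes of the result.
            let a = _mm_setr_epi32(7, 1, 2, 3);
            let r = _mm256_broadcastd_epi32(a);
            let mut out = [0i32; 8];
            _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, r);
            assert_eq!(out, [7; 8]);
        }
    }
}
```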
- _mm256_
mask_ ⚠cmp_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ pd_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ ph_ mask Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compare packed half-precision (16-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmp_ ps_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed 32-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed 64-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpeq_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpge_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpgt_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmple_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmplt_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
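All of these masked compares share one shape: compute the predicate per lane, then AND the resulting bitmask with the zeromask k1. A scalar sketch with signed 32-bit less-than:
```rust
// Bit i of the result is set only if bit i of k1 is set AND a[i] < b[i].
fn mask_cmplt_epi32_mask(k1: u8, a: [i32; 8], b: [i32; 8]) -> u8 {
    let mut k = 0u8;
    for i in 0..8 {
        if a[i] < b[i] {
            k |= 1 << i;
        }
    }
    k & k1 // lanes masked off in k1 come out zero
}

fn main() {
    let a = [0, 5, 0, 5, 0, 5, 0, 5];
    let b = [1; 8];
    assert_eq!(mask_cmplt_epi32_mask(0b1111_1111, a, b), 0b0101_0101);
    assert_eq!(mask_cmplt_epi32_mask(0b0000_1111, a, b), 0b0000_0101);
}
```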
- _mm256_
mask_ ⚠cmpneq_ epi8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epi16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epi32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed 32-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epi64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epu8_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epu16_ mask Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epu32_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmpneq_ epu64_ mask Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cmul_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, and store the results in dst using writemask k (the element is copied from src when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1], or the complex conjugate conjugate = vec.fp16[0] - i * vec.fp16[1].
- _mm256_
mask_ ⚠compress_ epi8 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Contiguously store the active 8-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
- _mm256_
mask_ ⚠compress_ epi16 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Contiguously store the active 16-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
- _mm256_
mask_ ⚠compress_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active 32-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
- _mm256_
mask_ ⚠compress_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active 64-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
- _mm256_
mask_ ⚠compress_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active double-precision (64-bit) floating-point elements in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
- _mm256_
mask_ ⚠compress_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active single-precision (32-bit) floating-point elements in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
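Compress packs the k-selected lanes to the low end and fills the rest from src; a scalar sketch:
```rust
fn mask_compress_epi32(src: [i32; 8], k: u8, a: [i32; 8]) -> [i32; 8] {
    let mut dst = src; // lanes beyond the packed prefix pass through from src
    let mut out = 0;
    for i in 0..8 {
        if k & (1 << i) != 0 {
            dst[out] = a[i];
            out += 1;
        }
    }
    dst
}

fn main() {
    let a = [1, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(
        mask_compress_epi32([0; 8], 0b1010_1010, a),
        [2, 4, 6, 8, 0, 0, 0, 0]
    );
}
```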
- _mm256_
mask_ ⚠compressstoreu_ epi8 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Contiguously store the active 8-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠compressstoreu_ epi16 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Contiguously store the active 16-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠compressstoreu_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active 32-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠compressstoreu_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active 64-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠compressstoreu_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active double-precision (64-bit) floating-point elements in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠compressstoreu_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Contiguously store the active single-precision (32-bit) floating-point elements in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠conflict_ epi32 Experimental (x86 or x86-64) and avx512cd,avx512vl
- Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit using writemask k (elements are copied from src when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
- _mm256_
mask_ ⚠conflict_ epi64 Experimental (x86 or x86-64) and avx512cd,avx512vl
- Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit using writemask k (elements are copied from src when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
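Conflict detection reports, per lane, which lower-indexed lanes hold an equal value, which is useful for spotting index collisions before a scatter. A sketch using the masked form with an all-ones writemask, so every lane is computed; the helper name is illustrative:

```rust
use std::arch::x86_64::*;

/// Lane i of the result has bit j set (for j < i) when lane j of `idx`
/// holds the same value as lane i; a nonzero lane flags a duplicated index.
#[target_feature(enable = "avx512cd,avx512vl")]
unsafe fn collision_map(idx: __m256i) -> __m256i {
    // All-ones writemask with a zeroed src keeps every comparison result.
    _mm256_mask_conflict_epi32(_mm256_setzero_si256(), 0xFF, idx)
}
```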
- _mm256_
mask_ ⚠conj_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compute the complex conjugates of complex numbers in a, and store the results in dst using writemask k (the element is copied from src when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number `complex = vec.fp16[0] + i * vec.fp16[1]`, or the complex conjugate `conjugate = vec.fp16[0] - i * vec.fp16[1]`.
- _mm256_
mask_ ⚠cvt_ roundps_ ph Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
- _mm256_
mask_ ⚠cvtepi8_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Sign extend packed 8-bit integers in a to packed 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi8_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Sign extend packed 8-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi8_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Sign extend packed 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
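These widening conversions read only as many source lanes as fit the destination; for the 8-to-64-bit case that is the low four bytes. A sketch of the masked form, assuming the features named above; the helper name is illustrative:

```rust
use std::arch::x86_64::*;

/// Sign-extend the low 4 bytes of `a` into four 64-bit lanes; lanes whose
/// mask bit is clear are copied from `src` instead.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn widen_selected(src: __m256i, k: __mmask8, a: __m128i) -> __m256i {
    // Only the low 4 bits of `k` matter: dst has four 64-bit lanes.
    _mm256_mask_cvtepi8_epi64(src, k, a)
}
```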
- _mm256_
mask_ ⚠cvtepi16_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi16_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Sign extend packed 16-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi16_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Sign extend packed 16-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi16_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed signed 16-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi16_ storeu_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtepi32_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Sign extend packed 32-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed signed 32-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi32_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtepi32_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
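The storeu variants narrow in registers and write through the mask, so unselected memory positions are left untouched. A sketch (helper name illustrative; the pointer cast matches whatever element type the signature expects):

```rust
use std::arch::x86_64::*;

/// Truncate eight 32-bit lanes to 16 bits and store them at `dst`; only
/// positions whose mask bit is set are written, the rest of the
/// destination region is not modified.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn store_truncated(dst: *mut i16, k: __mmask8, a: __m256i) {
    _mm256_mask_cvtepi32_storeu_epi16(dst.cast(), k, a);
}
```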
- _mm256_
mask_ ⚠cvtepi64_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi64_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi64_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepi64_ pd Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed signed 64-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtepi64_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed signed 64-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set). The upper 64 bits of dst are zeroed out.
- _mm256_
mask_ ⚠cvtepi64_ ps Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed signed 64-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtepi64_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtepi64_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtepi64_ storeu_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtepu8_ epi16 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu8_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Zero extend packed unsigned 8-bit integers in the low 8 bytes of a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu8_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Zero extend packed unsigned 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu16_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu16_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Zero extend packed unsigned 16-bit integers in the low 8 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu16_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed unsigned 16-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu32_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu32_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu32_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed unsigned 32-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtepu64_ pd Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed unsigned 64-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtepu64_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed unsigned 64-bit integers in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set). The upper 64 bits of dst are zeroed out.
- _mm256_
mask_ ⚠cvtepu64_ ps Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed unsigned 64-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtne2ps_ pbh Experimental (x86 or x86-64) and avx512bf16,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in two vectors a and b to packed BF16 (16-bit) floating-point elements and store the results in single vector dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
- _mm256_
mask_ ⚠cvtneps_ pbh Experimental (x86 or x86-64) and avx512bf16,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
- _mm256_
mask_ ⚠cvtpbh_ ps Experimental (x86 or x86-64) and avx512bf16,avx512vl
- Converts packed BF16 (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtpd_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtpd_ epi64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed signed 64-bit integers, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtpd_ epu32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtpd_ epu64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 64-bit integers, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtpd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set). The upper 64 bits of dst are zeroed out.
- _mm256_
mask_ ⚠cvtpd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epi16 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epi32 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epi64 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epu16 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed unsigned 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epu32 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit unsigned integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ epu64 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit unsigned integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ pd Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtph_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtps_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtps_ epi64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed signed 64-bit integers, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtps_ epu32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtps_ epu64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 64-bit integers, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtps_ ph Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
- _mm256_
mask_ ⚠cvtsepi16_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtsepi16_ storeu_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtsepi32_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtsepi32_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
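Unlike the plain truncating conversions, the cvtsepi forms clamp out-of-range values to the destination type's limits instead of wrapping. A sketch, with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// Narrow eight signed 32-bit lanes to 16 bits with signed saturation:
/// values outside [-32768, 32767] clamp rather than wrap.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn narrow_saturating(src: __m128i, k: __mmask8, a: __m256i) -> __m128i {
    _mm256_mask_cvtsepi32_epi16(src, k, a)
}
```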
- _mm256_
mask_ ⚠cvtsepi32_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtsepi32_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtsepi64_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtsepi64_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtsepi64_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtsepi64_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtsepi64_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtsepi64_ storeu_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvttpd_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttpd_ epi64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed signed 64-bit integers with truncation, and store the result in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvttpd_ epu32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttpd_ epu64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 64-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvttph_ epi16 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttph_ epi32 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttph_ epi64 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttph_ epu16 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed unsigned 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttph_ epu32 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 32-bit unsigned integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttph_ epu64 Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed 64-bit unsigned integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttps_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
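The extra `t` in the cvtt forms means truncation toward zero rather than the current rounding mode. A sketch, with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// Convert floats to 32-bit integers, truncating toward zero (1.9 -> 1,
/// -1.9 -> -1); lanes with a clear mask bit are copied from `src`.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn to_int_truncating(src: __m256i, k: __mmask8, a: __m256) -> __m256i {
    _mm256_mask_cvttps_epi32(src, k, a)
}
```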
- _mm256_
mask_ ⚠cvttps_ epi64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed signed 64-bit integers with truncation, and store the result in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvttps_ epu32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvttps_ epu64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 64-bit integers with truncation, and store the result in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠cvtusepi16_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi16_ storeu_ epi8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtusepi32_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi32_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi32_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 32-bit integers in a to packed 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtusepi32_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtusepi64_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi64_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi64_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtusepi64_ storeu_ epi8 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtusepi64_ storeu_ epi16 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed 16-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtusepi64_ storeu_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert packed unsigned 64-bit integers in a to packed 32-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
- _mm256_
mask_ ⚠cvtxph_ ps Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠cvtxps_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src to dst when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠dbsad_ epu8 Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
- _mm256_
mask_ ⚠div_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠div_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Divide packed half-precision (16-bit) floating-point elements in a by b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠div_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠dpbf16_ ps Experimental (x86 or x86-64) and avx512bf16,avx512vl
- Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
- _mm256_
mask_ ⚠dpbusd_ epi32 Experimental (x86 or x86-64) and avx512vnni,avx512vl
- Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
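This is the VNNI building block behind int8 dot products: every 32-bit lane accumulates four unsigned-by-signed 8-bit products. A sketch of one accumulation step, with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// One VNNI step: for each selected 32-bit lane, add the sum of four
/// u8-by-i8 products from `a` and `b` to the accumulator carried in `src`.
#[target_feature(enable = "avx512vnni,avx512vl")]
unsafe fn dot_step(src: __m256i, k: __mmask8, a: __m256i, b: __m256i) -> __m256i {
    _mm256_mask_dpbusd_epi32(src, k, a, b)
}
```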
- _mm256_
mask_ ⚠dpbusds_ epi32 Experimental (x86 or x86-64) and avx512vnni,avx512vl
- Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠dpwssd_ epi32 Experimental (x86 or x86-64) and avx512vnni,avx512vl
- Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠dpwssds_ epi32 Experimental (x86 or x86-64) and avx512vnni,avx512vl
- Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ epi8 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Load contiguous active 8-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ epi16 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Load contiguous active 16-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active 32-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active 64-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active double-precision (64-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expand_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active single-precision (32-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
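Expand is the inverse of compress: consecutive low lanes of the source are scattered out to the mask-selected positions. A sketch, shown for the 32-bit integer variant with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// Distribute the low lanes of `a` to the positions selected by `k`;
/// unselected positions are filled from `src`.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn expand_into(src: __m256i, k: __mmask8, a: __m256i) -> __m256i {
    _mm256_mask_expand_epi32(src, k, a)
}
```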
- _mm256_
mask_ ⚠expandloadu_ epi8 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Load contiguous active 8-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expandloadu_ epi16 Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Load contiguous active 16-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expandloadu_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active 32-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expandloadu_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active 64-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expandloadu_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active double-precision (64-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠expandloadu_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Load contiguous active single-precision (32-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠extractf32x4_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
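In Rust, the extract intrinsics take the 128-bit lane index as a const generic. A sketch that pulls out the upper half, with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// Extract the upper 128 bits (lane index 1) of `a` as four f32 lanes,
/// masked against `src` like any other writemasked operation.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn upper_half(src: __m128, k: __mmask8, a: __m256) -> __m128 {
    _mm256_mask_extractf32x4_ps::<1>(src, k, a)
}
```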
- _mm256_
mask_ ⚠extractf64x2_ pd Experimental (x86 or x86-64) and avx512dq,avx512vl
- Extracts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a, selected with IMM8, and stores the result in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠extracti32x4_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with IMM1, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠extracti64x2_ epi64 Experimental (x86 or x86-64) and avx512dq,avx512vl
- Extracts 128 bits (composed of 2 packed 64-bit integers) from a, selected with IMM8, and stores the result in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_
mask_ ⚠fcmadd_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, accumulate
to the corresponding complex numbers in c, and store the results in dst using writemask k (the element is
copied from a when the corresponding mask bit is not set). Each complex number is composed of two adjacent
half-precision (16-bit) floating-point elements, which defines the complex number `complex = vec.fp16[0] + i * vec.fp16[1]`, or the complex conjugate `conjugate = vec.fp16[0] - i * vec.fp16[1]`.
- _mm256_
mask_ ⚠fcmul_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a by the complex conjugates of packed complex numbers in b, and store the results in dst using writemask k (the element is copied from src when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number `complex = vec.fp16[0] + i * vec.fp16[1]`, or the complex conjugate `conjugate = vec.fp16[0] - i * vec.fp16[1]`.
- _mm256_
mask_ ⚠fixupimm_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
- _mm256_
mask_ ⚠fixupimm_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
- _mm256_
mask_ ⚠fmadd_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, accumulate to the corresponding complex numbers in c,
and store the results in dst using writemask k (the element is copied from a when the corresponding mask
bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point
elements, which defines the complex number `complex = vec.fp16[0] + i * vec.fp16[1]`.
- _mm256_
mask_ ⚠fmadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
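Note the masking convention for the FMA family: lanes with a clear bit keep their value from a, the first multiplicand, rather than from a separate src operand. A sketch, with an illustrative helper name:

```rust
use std::arch::x86_64::*;

/// Fused multiply-add on the selected lanes: a*b + c in a single rounding
/// step; unselected lanes pass `a` through unchanged.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn fma_selected(a: __m256, k: __mmask8, b: __m256, c: __m256) -> __m256 {
    _mm256_mask_fmadd_ps(a, k, b, c)
}
```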
- _mm256_
mask_ ⚠fmaddsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmaddsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmaddsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsubadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsubadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c to/from the intermediate result, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmsubadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fmul_ pch Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, and store the results in dst using writemask k (the element is copied from src when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number `complex = vec.fp16[0] + i * vec.fp16[1]`.
- _mm256_
mask_ ⚠fnmadd_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fnmadd_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract the intermediate result from packed elements in c, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fnmadd_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fnmsub_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fnmsub_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (the element is copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fnmsub_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_
mask_ ⚠fpclass_ pd_ mask Experimental (x86 or x86-64) and avx512dq,avx512vl
- Test packed double-precision (64-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set). imm can be a combination of:
0x01 // QNaN
0x02 // Positive Zero
0x04 // Negative Zero
0x08 // Positive Infinity
0x10 // Negative Infinity
0x20 // Denormal
0x40 // Negative
0x80 // SNaN
- _mm256_
mask_ ⚠fpclass_ ph_ mask Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Test packed half-precision (16-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k using zeromask k (elements are zeroed out when the corresponding mask bit is not set). imm can be a combination of:
0x01 // QNaN
0x02 // Positive Zero
0x04 // Negative Zero
0x08 // Positive Infinity
0x10 // Negative Infinity
0x20 // Denormal
0x40 // Negative
0x80 // SNaN
- _mm256_
mask_ ⚠fpclass_ ps_ mask Experimental (x86 or x86-64) and avx512dq,avx512vl
- Test packed single-precision (32-bit) floating-point elements in a for special categories specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set). imm can be a combination of:
0x01 // QNaN
0x02 // Positive Zero
0x04 // Negative Zero
0x08 // Positive Infinity
0x10 // Negative Infinity
0x20 // Denormal
0x40 // Negative
0x80 // SNaN
- _mm256_
mask_ ⚠getexp_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
- _mm256_
mask_ ⚠getexp_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Convert the exponent of each packed half-precision (16-bit) floating-point element in a to a half-precision
(16-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k
(elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates `floor(log2(x))` for each element.
- _mm256_
mask_ ⚠getexp_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
- _mm256_
mask_ ⚠getmant_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
- _mm256_
mask_ ⚠getmant_ ph Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Normalize the mantissas of packed half-precision (16-bit) floating-point elements in a, and store
the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
This intrinsic essentially calculates `±(2^k)*|x.significand|`, where k depends on the interval range defined by norm and the sign depends on sign and the source sign.
- _mm256_
mask_ ⚠getmant_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
- _mm256_
mask_ ⚠gf2p8affine_ epi64_ epi8 Experimental (x86 or x86-64) and gfni,avx512bw,avx512vl
- Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
- _mm256_
mask_ ⚠gf2p8affineinv_ epi64_ epi8 Experimental (x86 or x86-64) and gfni,avx512bw,avx512vl
- Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1. The inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
- _mm256_
mask_ ⚠gf2p8mul_ epi8 Experimental (x86 or x86-64) and gfni,avx512bw,avx512vl
- Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
- _mm256_
mask_ ⚠i32scatter_ epi32 Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 8 32-bit integer elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
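The scale factor is a const generic in Rust and multiplies each index into a byte offset; with 32-bit elements the natural choice is 4. A sketch (helper name illustrative; the pointer cast defers to whatever pointer type the intrinsic's signature expects):

```rust
use std::arch::x86_64::*;

/// Write the selected 32-bit lanes of `a` to base[vindex[j]] for each set
/// mask bit j; SCALE = 4 turns element indices into byte offsets.
#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn scatter_selected(base: *mut i32, k: __mmask8, vindex: __m256i, a: __m256i) {
    _mm256_mask_i32scatter_epi32::<4>(base.cast(), k, vindex, a);
}
```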
- _mm256_
mask_ ⚠i32scatter_ epi64 Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 64-bit integer elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_
mask_ ⚠i32scatter_ pd Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 double-precision (64-bit) floating-point elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_
mask_ ⚠i32scatter_ ps Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 8 single-precision (32-bit) floating-point elements from a to memory starting at location base_addr at packed 32-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_mask_i64scatter_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 32-bit integer elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_mask_i64scatter_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 64-bit integer elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_mask_i64scatter_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 double-precision (64-bit) floating-point elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_mask_i64scatter_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Stores 4 single-precision (32-bit) floating-point elements from a to memory starting at location base_addr at packed 64-bit integer indices stored in vindex scaled by scale using writemask k (elements whose corresponding mask bit is not set are not written to memory).
- _mm256_mask_insertf32x4 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Copy a to tmp, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into tmp at the location specified by imm8. Store tmp to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_insertf64x2 ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Copy a to tmp, then insert 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from b into tmp at the location specified by IMM8, and copy tmp to dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_mask_inserti32x4 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Copy a to tmp, then insert 128 bits (composed of 4 packed 32-bit integers) from b into tmp at the location specified by imm8. Store tmp to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_inserti64x2 ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Copy a to tmp, then insert 128 bits (composed of 2 packed 64-bit integers) from b into tmp at the location specified by IMM8, and copy tmp to dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_mask_load_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed 32-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_mask_load_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed 64-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_mask_load_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed double-precision (64-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_mask_load_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed single-precision (32-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
- _mm256_mask_loadu_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Load packed 8-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
- _mm256_mask_loadu_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Load packed 16-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
- _mm256_mask_loadu_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed 32-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
- _mm256_mask_loadu_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed 64-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
- _mm256_mask_loadu_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed double-precision (64-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
- _mm256_mask_loadu_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Load packed single-precision (32-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
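The load/loadu pairs above differ only in the alignment requirement; a minimal sketch of the unaligned masked load (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn load_low_half(data: &[i32; 4]) -> [i32; 8] {
    let src = _mm256_set1_epi32(-1); // value for the masked-off lanes
    // Masked-off lanes are not read (AVX-512 fault suppression), so a
    // four-element buffer is enough for an 8-lane load with k = 0b0000_1111.
    let r = _mm256_mask_loadu_epi32(src, 0b0000_1111, data.as_ptr());
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        let data = [1, 2, 3, 4];
        assert_eq!(unsafe { load_low_half(&data) }, [1, 2, 3, 4, -1, -1, -1, -1]);
    }
}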
- _mm256_mask_lzcnt_epi32 ⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl
- Count the number of leading zero bits in each packed 32-bit integer in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_lzcnt_epi64 ⚠ Experimental (x86 or x86-64) and avx512cd,avx512vl
- Count the number of leading zero bits in each packed 64-bit integer in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
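A short sketch of the masked leading-zero count (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512cd,avx512vl")]
unsafe fn masked_lzcnt() -> [i32; 8] {
    let src = _mm256_set1_epi32(99);
    let a = _mm256_setr_epi32(1, 2, 4, 8, 16, 32, 64, 128);
    // Low four lanes yield 31, 30, 29, 28 leading zeros; high lanes keep src.
    let r = _mm256_mask_lzcnt_epi32(src, 0b0000_1111, a);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512cd") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_lzcnt() }, [31, 30, 29, 28, 99, 99, 99, 99]);
    }
}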
- _mm256_mask_madd52hi_epu64 ⚠ Experimental (x86 or x86-64) and avx512ifma,avx512vl
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_madd52lo_epu64 ⚠ Experimental (x86 or x86-64) and avx512ifma,avx512vl
- Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
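To see how the 52-bit multiply-add treats a as both accumulator and fallback, a minimal sketch (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512ifma,avx512vl")]
unsafe fn masked_madd52lo() -> [u64; 4] {
    let a = _mm256_set1_epi64x(1000); // 64-bit accumulators
    let b = _mm256_set1_epi64x(3);    // only the low 52 bits take part
    let c = _mm256_set1_epi64x(5);
    // Lanes 0 and 2 compute 1000 + low52(3 * 5) = 1015; lanes 1 and 3 keep a.
    let r = _mm256_mask_madd52lo_epu64(a, 0b0101, b, c);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512ifma") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_madd52lo() }, [1015, 1000, 1015, 1000]);
    }
}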
- _mm256_mask_madd_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers, and pack the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_maddubs_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply packed unsigned 8-bit integers in a by packed signed 8-bit integers in b, producing intermediate signed 16-bit integers. Horizontally add adjacent pairs of intermediate signed 16-bit integers, and pack the saturated results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epu8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epu16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epu32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_epu64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
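The max/min family all share the same write-mask merging pattern; one sketch stands in for the rest (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn masked_max() -> [i32; 8] {
    let src = _mm256_set1_epi32(-1); // fallback for masked-off lanes
    let a = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 8);
    let b = _mm256_setr_epi32(8, 7, 6, 5, 4, 3, 2, 1);
    // k = 0b0000_1111: only the low four lanes are compared; the rest take src.
    let r = _mm256_mask_max_epi32(src, 0b0000_1111, a, b);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_max() }, [8, 7, 6, 5, -1, -1, -1, -1]);
    }
}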
- _mm256_mask_max_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_max_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compare packed half-precision (16-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Does not follow the IEEE Standard for Floating-Point Arithmetic (IEEE 754) maximum value when inputs are NaN or signed-zero values.
- _mm256_mask_max_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 8-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed signed 16-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epu8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 8-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epu16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Compare packed unsigned 16-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epu32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_epu64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_min_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compare packed half-precision (16-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Does not follow the IEEE Standard for Floating-Point Arithmetic (IEEE 754) minimum value when inputs are NaN or signed-zero values.
- _mm256_mask_min_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Move packed 8-bit integers from a into dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Move packed 16-bit integers from a into dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Move packed 32-bit integers from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Move packed 64-bit integers from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Move packed double-precision (64-bit) floating-point elements from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mov_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Move packed single-precision (32-bit) floating-point elements from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_movedup_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_movehdup_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_moveldup_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mul_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mul_epu32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
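A small sketch of the widening multiply, which reads only the low 32-bit half of each 64-bit element (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn masked_widening_mul() -> [u64; 4] {
    let src = _mm256_set1_epi64x(0); // fallback for masked-off lanes
    // Only the even-indexed 32-bit elements (the low half of each 64-bit
    // element) take part in the multiply.
    let a = _mm256_setr_epi32(2, 0, 3, 0, 4, 0, 5, 0);
    let b = _mm256_setr_epi32(10, 0, 10, 0, 10, 0, 10, 0);
    let r = _mm256_mask_mul_epu32(src, 0b0111, a, b); // lane 3 masked off
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_widening_mul() }, [20, 30, 40, 0]);
    }
}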
- _mm256_mask_mul_pch ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed complex numbers in a and b, and store the results in dst using writemask k (the element is copied from src when the corresponding mask bit is not set). Each complex number is composed of two adjacent half-precision (16-bit) floating-point elements, which defines the complex number complex = vec.fp16[0] + i * vec.fp16[1].
- _mm256_mask_mul_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mul_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Multiply packed half-precision (16-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mul_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mulhi_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply the packed signed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mulhi_epu16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mulhrs_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncate each intermediate integer to the 18 most significant bits, round by adding 1, and store bits [16:1] to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mullo_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the low 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mullo_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_mullo_epi64 ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Multiply packed 64-bit integers in a and b, producing intermediate 128-bit integers, and store the low 64 bits of the intermediate integers in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_mask_multishift_epi64_epi8 ⚠ Experimental (x86 or x86-64) and avx512vbmi,avx512vl
- For each 64-bit element in b, select 8 unaligned bytes using a byte-granular shift control within the corresponding 64-bit element of a, and store the 8 assembled bytes to the corresponding 64-bit element of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_or_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_or_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_or_pd ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise OR of packed double-precision (64-bit) floating point numbers in a and b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_mask_or_ps ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Compute the bitwise OR of packed single-precision (32-bit) floating point numbers in a and b and store the results in dst using writemask k (elements are copied from src if the corresponding bit is not set).
- _mm256_mask_packs_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 16-bit integers from a and b to packed 8-bit integers using signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_packs_epi32 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 32-bit integers from a and b to packed 16-bit integers using signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_packus_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_packus_epi32 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Convert packed signed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
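A minimal sketch showing the signed saturation of the pack family (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512bw,avx512vl")]
unsafe fn masked_pack() -> [i16; 16] {
    let src = _mm256_set1_epi16(0);
    let a = _mm256_set1_epi32(100_000);  // saturates to i16::MAX
    let b = _mm256_set1_epi32(-100_000); // saturates to i16::MIN
    // All 16 result lanes enabled; note that each 128-bit lane holds four
    // packed values from a followed by four from b.
    let r = _mm256_mask_packs_epi32(src, 0xFFFF, a, b);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512bw") && is_x86_feature_detected!("avx512vl") {
        let out = unsafe { masked_pack() };
        assert_eq!(out[0], i16::MAX);
        assert_eq!(out[4], i16::MIN);
    }
}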
- _mm256_mask_permute_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permute_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutevar_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutevar_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_epi8 ⚠ Experimental (x86 or x86-64) and avx512vbmi,avx512vl
- Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex2var_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_permutex_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutex_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_epi8 ⚠ Experimental (x86 or x86-64) and avx512vbmi,avx512vl
- Shuffle 8-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 16-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_permutexvar_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
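A short sketch of a cross-lane permute driven by an index vector (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn masked_reverse() -> [i32; 8] {
    let src = _mm256_set1_epi32(0);
    let idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0); // cross-lane reversal
    let a = _mm256_setr_epi32(10, 11, 12, 13, 14, 15, 16, 17);
    let r = _mm256_mask_permutexvar_epi32(src, 0xFF, idx, a);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_reverse() }, [17, 16, 15, 14, 13, 12, 11, 10]);
    }
}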
- _mm256_mask_popcnt_epi8 ⚠ Experimental (x86 or x86-64) and avx512bitalg,avx512vl
- For each packed 8-bit integer, maps the value to the number of logical 1 bits.
- _mm256_mask_popcnt_epi16 ⚠ Experimental (x86 or x86-64) and avx512bitalg,avx512vl
- For each packed 16-bit integer, maps the value to the number of logical 1 bits.
- _mm256_mask_popcnt_epi32 ⚠ Experimental (x86 or x86-64) and avx512vpopcntdq,avx512vl
- For each packed 32-bit integer, maps the value to the number of logical 1 bits.
- _mm256_mask_popcnt_epi64 ⚠ Experimental (x86 or x86-64) and avx512vpopcntdq,avx512vl
- For each packed 64-bit integer, maps the value to the number of logical 1 bits.
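A minimal sketch of the masked population count (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512vpopcntdq,avx512vl")]
unsafe fn masked_popcnt() -> [i32; 8] {
    let src = _mm256_set1_epi32(-1);
    let a = _mm256_setr_epi32(0, 1, 3, 7, 15, 255, 256, -1);
    // The last lane is masked off and keeps src (-1).
    let r = _mm256_mask_popcnt_epi32(src, 0b0111_1111, a);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512vpopcntdq") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_popcnt() }, [0, 1, 2, 3, 4, 8, 1, -1]);
    }
}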
- _mm256_mask_range_pd ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Calculate the max, min, absolute max, or absolute min (depending on the control in imm8) for packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src to dst if the corresponding mask bit is not set). The lower 2 bits of imm8 specify the operation control: 00 = min, 01 = max, 10 = absolute min, 11 = absolute max. The upper 2 bits of imm8 specify the sign control: 00 = sign from a, 01 = sign from compare result, 10 = clear sign bit, 11 = set sign bit.
- _mm256_mask_range_ps ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Calculate the max, min, absolute max, or absolute min (depending on the control in imm8) for packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src to dst if the corresponding mask bit is not set). The lower 2 bits of imm8 specify the operation control: 00 = min, 01 = max, 10 = absolute min, 11 = absolute max. The upper 2 bits of imm8 specify the sign control: 00 = sign from a, 01 = sign from compare result, 10 = clear sign bit, 11 = set sign bit.
- _mm256_mask_rcp14_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
- _mm256_mask_rcp14_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
- _mm256_mask_rcp_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compute the approximate reciprocal of packed 16-bit floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 1.5*2^-12.
- _mm256_mask_reduce_add_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by addition using mask k. Returns the sum of all active elements in a.
- _mm256_mask_reduce_add_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by addition using mask k. Returns the sum of all active elements in a.
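Unlike the element-wise intrinsics above, the reduce family returns a scalar; a minimal sketch (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512bw,avx512vl")]
unsafe fn sum_low_half() -> i16 {
    let a = _mm256_set1_epi16(1); // sixteen 16-bit lanes, all equal to 1
    // Only the low eight lanes are active, so the masked reduction returns 8.
    _mm256_mask_reduce_add_epi16(0x00FF, a)
}

fn main() {
    if is_x86_feature_detected!("avx512bw") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { sum_low_half() }, 8);
    }
}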
- _mm256_mask_reduce_and_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by bitwise AND using mask k. Returns the bitwise AND of all active elements in a.
- _mm256_mask_reduce_and_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by bitwise AND using mask k. Returns the bitwise AND of all active elements in a.
- _mm256_mask_reduce_max_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by maximum using mask k. Returns the maximum of all active elements in a.
- _mm256_mask_reduce_max_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by maximum using mask k. Returns the maximum of all active elements in a.
- _mm256_mask_reduce_max_epu8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed unsigned 8-bit integers in a by maximum using mask k. Returns the maximum of all active elements in a.
- _mm256_mask_reduce_max_epu16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed unsigned 16-bit integers in a by maximum using mask k. Returns the maximum of all active elements in a.
- _mm256_mask_reduce_min_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by minimum using mask k. Returns the minimum of all active elements in a.
- _mm256_mask_reduce_min_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by minimum using mask k. Returns the minimum of all active elements in a.
- _mm256_mask_reduce_min_epu8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed unsigned 8-bit integers in a by minimum using mask k. Returns the minimum of all active elements in a.
- _mm256_mask_reduce_min_epu16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed unsigned 16-bit integers in a by minimum using mask k. Returns the minimum of all active elements in a.
- _mm256_mask_reduce_mul_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by multiplication using mask k. Returns the product of all active elements in a.
- _mm256_mask_reduce_mul_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by multiplication using mask k. Returns the product of all active elements in a.
- _mm256_mask_reduce_or_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 8-bit integers in a by bitwise OR using mask k. Returns the bitwise OR of all active elements in a.
- _mm256_mask_reduce_or_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Reduce the packed 16-bit integers in a by bitwise OR using mask k. Returns the bitwise OR of all active elements in a.
- _mm256_mask_reduce_pd ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Extract the reduced argument of packed double-precision (64-bit) floating-point elements in a by the number of bits specified by imm8, and store the results in dst using writemask k (elements are copied from src to dst if the corresponding mask bit is not set). Rounding is done according to the imm8 parameter, which can be one of:
- _mm256_mask_reduce_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Extract the reduced argument of packed half-precision (16-bit) floating-point elements in a by the number of bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_reduce_ps ⚠ Experimental (x86 or x86-64) and avx512dq,avx512vl
- Extract the reduced argument of packed single-precision (32-bit) floating-point elements in a by the number of bits specified by imm8, and store the results in dst using writemask k (elements are copied from src to dst if the corresponding mask bit is not set). Rounding is done according to the imm8 parameter, which can be one of:
- _mm256_mask_rol_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_rol_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_rolv_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_rolv_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_ror_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_ror_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_rorv_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_rorv_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
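A short sketch of the immediate-count rotate from this family (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn masked_rotate() -> [u32; 8] {
    let src = _mm256_set1_epi32(0);
    let a = _mm256_set1_epi32(0x8000_0001u32 as i32);
    // Rotate left by 1: the top bit wraps around to bit 0, giving 3.
    let r = _mm256_mask_rol_epi32::<1>(src, 0b0000_1111, a);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_rotate() }, [3, 3, 3, 3, 0, 0, 0, 0]);
    }
}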
- _mm256_mask_roundscale_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Round packed double-precision (64-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
- _mm256_mask_roundscale_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Round packed half-precision (16-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_roundscale_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Round packed single-precision (32-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
- _mm256_mask_rsqrt14_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
- _mm256_mask_rsqrt14_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
- _mm256_mask_rsqrt_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Compute the approximate reciprocal square root of packed half-precision (16-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 1.5*2^-12.
- _mm256_mask_scalef_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Scale the packed double-precision (64-bit) floating-point elements in a using values from b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_scalef_ph ⚠ Experimental (x86 or x86-64) and avx512fp16,avx512vl
- Scale the packed half-precision (16-bit) floating-point elements in a using values from b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_scalef_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Scale the packed single-precision (32-bit) floating-point elements in a using values from b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_set1_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Broadcast 8-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_set1_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Broadcast 16-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_set1_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast 32-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_set1_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Broadcast 64-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
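A minimal sketch of the masked broadcast (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn masked_broadcast() -> [i32; 8] {
    let src = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    // Broadcast 42 into the even lanes only; odd lanes keep src.
    let r = _mm256_mask_set1_epi32(src, 0b0101_0101, 42);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512f") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_broadcast() }, [42, 1, 42, 3, 42, 5, 42, 7]);
    }
}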
- _mm256_mask_shldi_epi16 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by imm8 bits, and store the upper 16-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shldi_epi32 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by imm8 bits, and store the upper 32-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shldi_epi64 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by imm8 bits, and store the upper 64-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shldv_epi16 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 16-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_shldv_epi32 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 32-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_shldv_epi64 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 64-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
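A small sketch of the variable-count funnel shift: each lane keeps the upper half of (a:b) shifted left by c (helper name and values are ours; nightly toolchain assumed):
#![feature(stdarch_x86_avx512)] // nightly gate at the time of writing
use std::arch::x86_64::*;

#[target_feature(enable = "avx512vbmi2,avx512vl")]
unsafe fn masked_funnel_shift() -> [u32; 8] {
    let a = _mm256_set1_epi32(0); // upper half of each 64-bit concatenation
    let b = _mm256_set1_epi32(0x8000_0000u32 as i32); // lower half
    let c = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 8); // per-lane shift counts
    // Bit 31 of b becomes bit (c - 1) of the result, so lane i yields 1 << (c[i] - 1).
    let r = _mm256_mask_shldv_epi32(a, 0xFF, b, c);
    core::mem::transmute(r)
}

fn main() {
    if is_x86_feature_detected!("avx512vbmi2") && is_x86_feature_detected!("avx512vl") {
        assert_eq!(unsafe { masked_funnel_shift() }, [1, 2, 4, 8, 16, 32, 64, 128]);
    }
}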
- _mm256_mask_shrdi_epi16 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by imm8 bits, and store the lower 16-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shrdi_epi32 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by imm8 bits, and store the lower 32-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shrdi_epi64 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by imm8 bits, and store the lower 64-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shrdv_epi16 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 16-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_shrdv_epi32 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 32-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_shrdv_epi64 ⚠ Experimental (x86 or x86-64) and avx512vbmi2,avx512vl
- Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 64-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
- _mm256_mask_shuffle_epi8 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 8-bit integers in a within 128-bit lanes using the control in the corresponding 8-bit element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_f32x4 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_f64x2 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_i32x4 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_i64x2 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_pd ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shuffle_ps ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shufflehi_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the high 64 bits of 128-bit lanes of dst, with the low 64 bits of 128-bit lanes being copied from a to dst, using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_shufflelo_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shuffle 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the low 64 bits of 128-bit lanes of dst, with the high 64 bits of 128-bit lanes being copied from a to dst, using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_sll_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shift packed 16-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_sll_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_sll_epi64 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_slli_epi16 ⚠ Experimental (x86 or x86-64) and avx512bw,avx512vl
- Shift packed 16-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_slli_epi32 ⚠ Experimental (x86 or x86-64) and avx512f,avx512vl
- Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
- _mm256_mask_slli_epi64 ⚠ Experimental