Module core::arch::x86_64 (stable since Rust 1.27.0)

This is supported on x86-64 only.

Platform-specific intrinsics for the x86_64 platform.

See the module documentation for more details.
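All of these intrinsics are `unsafe` and most are gated on CPU features. One common usage pattern (a sketch, not taken from this page; the wrapper name `max_basic_leaf` is illustrative) is to guard the call with a compile-time `cfg` so the code still builds on other architectures:

```rust
// Sketch: calling a core::arch::x86_64 intrinsic behind a cfg guard.
// cpuid is part of the x86-64 baseline ISA, so no runtime feature
// detection is needed for this particular intrinsic.

#[allow(unreachable_code)]
fn max_basic_leaf() -> Option<u32> {
    #[cfg(target_arch = "x86_64")]
    {
        // Leaf 0 reports the highest supported basic cpuid leaf in EAX.
        return Some(unsafe { std::arch::x86_64::__cpuid(0) }.eax);
    }
    // Not an x86-64 target: the intrinsic does not exist here.
    None
}

fn main() {
    match max_basic_leaf() {
        Some(leaf) => println!("highest basic cpuid leaf: {:#x}", leaf),
        None => println!("not running on x86-64"),
    }
}
```

Features that the build target does not guarantee (AVX2, BMI1, and so on) additionally need a runtime check with `is_x86_64_feature_detected!` before the call.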

Structs

CpuidResult (x86-64): Result of the cpuid instruction.
__m128 (x86-64): 128-bit wide set of four f32 types, x86-specific.
__m128d (x86-64): 128-bit wide set of two f64 types, x86-specific.
__m128i (x86-64): 128-bit wide integer vector type, x86-specific.
__m256 (x86-64): 256-bit wide set of eight f32 types, x86-specific.
__m256d (x86-64): 256-bit wide set of four f64 types, x86-specific.
__m256i (x86-64): 256-bit wide integer vector type, x86-specific.
__m512 (Experimental, x86-64): 512-bit wide set of sixteen f32 types, x86-specific.
__m512d (Experimental, x86-64): 512-bit wide set of eight f64 types, x86-specific.
__m512i (Experimental, x86-64): 512-bit wide integer vector type, x86-specific.

Constants

_CMP_EQ_OQ (x86-64): Equal (ordered, non-signaling).
_CMP_EQ_OS (x86-64): Equal (ordered, signaling).
_CMP_EQ_UQ (x86-64): Equal (unordered, non-signaling).
_CMP_EQ_US (x86-64): Equal (unordered, signaling).
_CMP_FALSE_OQ (x86-64): False (ordered, non-signaling).
_CMP_FALSE_OS (x86-64): False (ordered, signaling).
_CMP_GE_OQ (x86-64): Greater-than-or-equal (ordered, non-signaling).
_CMP_GE_OS (x86-64): Greater-than-or-equal (ordered, signaling).
_CMP_GT_OQ (x86-64): Greater-than (ordered, non-signaling).
_CMP_GT_OS (x86-64): Greater-than (ordered, signaling).
_CMP_LE_OQ (x86-64): Less-than-or-equal (ordered, non-signaling).
_CMP_LE_OS (x86-64): Less-than-or-equal (ordered, signaling).
_CMP_LT_OQ (x86-64): Less-than (ordered, non-signaling).
_CMP_LT_OS (x86-64): Less-than (ordered, signaling).
_CMP_NEQ_OQ (x86-64): Not-equal (ordered, non-signaling).
_CMP_NEQ_OS (x86-64): Not-equal (ordered, signaling).
_CMP_NEQ_UQ (x86-64): Not-equal (unordered, non-signaling).
_CMP_NEQ_US (x86-64): Not-equal (unordered, signaling).
_CMP_NGE_UQ (x86-64): Not-greater-than-or-equal (unordered, non-signaling).
_CMP_NGE_US (x86-64): Not-greater-than-or-equal (unordered, signaling).
_CMP_NGT_UQ (x86-64): Not-greater-than (unordered, non-signaling).
_CMP_NGT_US (x86-64): Not-greater-than (unordered, signaling).
_CMP_NLE_UQ (x86-64): Not-less-than-or-equal (unordered, non-signaling).
_CMP_NLE_US (x86-64): Not-less-than-or-equal (unordered, signaling).
_CMP_NLT_UQ (x86-64): Not-less-than (unordered, non-signaling).
_CMP_NLT_US (x86-64): Not-less-than (unordered, signaling).
_CMP_ORD_Q (x86-64): Ordered (non-signaling).
_CMP_ORD_S (x86-64): Ordered (signaling).
_CMP_TRUE_UQ (x86-64): True (unordered, non-signaling).
_CMP_TRUE_US (x86-64): True (unordered, signaling).
_CMP_UNORD_Q (x86-64): Unordered (non-signaling).
_CMP_UNORD_S (x86-64): Unordered (signaling).
_MM_EXCEPT_DENORM (x86-64): See _mm_setcsr.
_MM_EXCEPT_DIV_ZERO (x86-64): See _mm_setcsr.
_MM_EXCEPT_INEXACT (x86-64): See _mm_setcsr.
_MM_EXCEPT_INVALID (x86-64): See _mm_setcsr.
_MM_EXCEPT_MASK (x86-64): See _MM_GET_EXCEPTION_STATE.
_MM_EXCEPT_OVERFLOW (x86-64): See _mm_setcsr.
_MM_EXCEPT_UNDERFLOW (x86-64): See _mm_setcsr.
_MM_FLUSH_ZERO_MASK (x86-64): See _MM_GET_FLUSH_ZERO_MODE.
_MM_FLUSH_ZERO_OFF (x86-64): See _mm_setcsr.
_MM_FLUSH_ZERO_ON (x86-64): See _mm_setcsr.
_MM_FROUND_CEIL (x86-64): Round up and do not suppress exceptions.
_MM_FROUND_CUR_DIRECTION (x86-64): Use MXCSR.RC; see _MM_SET_ROUNDING_MODE.
_MM_FROUND_FLOOR (x86-64): Round down and do not suppress exceptions.
_MM_FROUND_NEARBYINT (x86-64): Use MXCSR.RC and suppress exceptions; see _MM_SET_ROUNDING_MODE.
_MM_FROUND_NINT (x86-64): Round to nearest and do not suppress exceptions.
_MM_FROUND_NO_EXC (x86-64): Suppress exceptions.
_MM_FROUND_RAISE_EXC (x86-64): Do not suppress exceptions.
_MM_FROUND_RINT (x86-64): Use MXCSR.RC and do not suppress exceptions; see _MM_SET_ROUNDING_MODE.
_MM_FROUND_TO_NEAREST_INT (x86-64): Round to nearest.
_MM_FROUND_TO_NEG_INF (x86-64): Round down.
_MM_FROUND_TO_POS_INF (x86-64): Round up.
_MM_FROUND_TO_ZERO (x86-64): Truncate.
_MM_FROUND_TRUNC (x86-64): Truncate and do not suppress exceptions.
_MM_HINT_NTA (x86-64): See _mm_prefetch.
_MM_HINT_T0 (x86-64): See _mm_prefetch.
_MM_HINT_T1 (x86-64): See _mm_prefetch.
_MM_HINT_T2 (x86-64): See _mm_prefetch.
_MM_MASK_DENORM (x86-64): See _mm_setcsr.
_MM_MASK_DIV_ZERO (x86-64): See _mm_setcsr.
_MM_MASK_INEXACT (x86-64): See _mm_setcsr.
_MM_MASK_INVALID (x86-64): See _mm_setcsr.
_MM_MASK_MASK (x86-64): See _MM_GET_EXCEPTION_MASK.
_MM_MASK_OVERFLOW (x86-64): See _mm_setcsr.
_MM_MASK_UNDERFLOW (x86-64): See _mm_setcsr.
_MM_ROUND_DOWN (x86-64): See _mm_setcsr.
_MM_ROUND_MASK (x86-64): See _MM_GET_ROUNDING_MODE.
_MM_ROUND_NEAREST (x86-64): See _mm_setcsr.
_MM_ROUND_TOWARD_ZERO (x86-64): See _mm_setcsr.
_MM_ROUND_UP (x86-64): See _mm_setcsr.
_SIDD_BIT_MASK (x86-64): Mask only: return the bit mask.
_SIDD_CMP_EQUAL_ANY (x86-64): For each character in a, find if it is in b (default).
_SIDD_CMP_EQUAL_EACH (x86-64): The strings defined by a and b are equal.
_SIDD_CMP_EQUAL_ORDERED (x86-64): Search for the defined substring in the target.
_SIDD_CMP_RANGES (x86-64): For each character c in a, determine whether b[0] <= c <= b[1] or b[2] <= c <= b[3], and so on for further pairs.
_SIDD_LEAST_SIGNIFICANT (x86-64): Index only: return the least significant bit (default).
_SIDD_MASKED_NEGATIVE_POLARITY (x86-64): Negates results only before the end of the string.
_SIDD_MASKED_POSITIVE_POLARITY (x86-64): Do not negate results before the end of the string.
_SIDD_MOST_SIGNIFICANT (x86-64): Index only: return the most significant bit.
_SIDD_NEGATIVE_POLARITY (x86-64): Negates results.
_SIDD_POSITIVE_POLARITY (x86-64): Do not negate results (default).
_SIDD_SBYTE_OPS (x86-64): String contains signed 8-bit characters.
_SIDD_SWORD_OPS (x86-64): String contains signed 16-bit characters.
_SIDD_UBYTE_OPS (x86-64): String contains unsigned 8-bit characters (default).
_SIDD_UNIT_MASK (x86-64): Mask only: return the byte mask.
_SIDD_UWORD_OPS (x86-64): String contains unsigned 16-bit characters.
_XCR_XFEATURE_ENABLED_MASK (x86-64): XFEATURE_ENABLED_MASK for XCR.
_MM_CMPINT_EQ (Experimental, x86-64): Equal.
_MM_CMPINT_FALSE (Experimental, x86-64): False.
_MM_CMPINT_LE (Experimental, x86-64): Less-than-or-equal.
_MM_CMPINT_LT (Experimental, x86-64): Less-than.
_MM_CMPINT_NE (Experimental, x86-64): Not-equal.
_MM_CMPINT_NLE (Experimental, x86-64): Not-less-than-or-equal.
_MM_CMPINT_NLT (Experimental, x86-64): Not-less-than.
_MM_CMPINT_TRUE (Experimental, x86-64): True.
_MM_MANT_NORM_1_2 (Experimental, x86-64): Interval [1, 2).
_MM_MANT_NORM_P5_1 (Experimental, x86-64): Interval [0.5, 1).
_MM_MANT_NORM_P5_2 (Experimental, x86-64): Interval [0.5, 2).
_MM_MANT_NORM_P75_1P5 (Experimental, x86-64): Interval [0.75, 1.5).
_MM_MANT_SIGN_NAN (Experimental, x86-64): DEST = NaN if sign(SRC) = 1.
_MM_MANT_SIGN_SRC (Experimental, x86-64): sign = sign(SRC).
_MM_MANT_SIGN_ZERO (Experimental, x86-64): sign = 0.

_MM_PERM_AAAA through _MM_PERM_DDDD (all Experimental, x86-64): the 256 shuffle-control constants, one for every four-letter selector over {A, B, C, D} (_MM_PERM_AAAA, _MM_PERM_AAAB, ..., _MM_PERM_DDDC, _MM_PERM_DDDD).
_XABORT_CAPACITY (Experimental, x86-64): Transaction abort due to the transaction using too much memory.
_XABORT_CONFLICT (Experimental, x86-64): Transaction abort due to a memory conflict with another thread.
_XABORT_DEBUG (Experimental, x86-64): Transaction abort due to a debug trap.
_XABORT_EXPLICIT (Experimental, x86-64): Transaction explicitly aborted with xabort. The parameter passed to xabort is available with _xabort_code(status).
_XABORT_NESTED (Experimental, x86-64): Transaction abort in an inner nested transaction.
_XABORT_RETRY (Experimental, x86-64): Transaction retry is possible.
_XBEGIN_STARTED (Experimental, x86-64): Transaction successfully started.

Functions

_MM_GET_EXCEPTION_MASK (x86-64 and sse): See _mm_setcsr.
_MM_GET_EXCEPTION_STATE (x86-64 and sse): See _mm_setcsr.
_MM_GET_FLUSH_ZERO_MODE (x86-64 and sse): See _mm_setcsr.
_MM_GET_ROUNDING_MODE (x86-64 and sse): See _mm_setcsr.
_MM_SET_EXCEPTION_MASK (x86-64 and sse): See _mm_setcsr.
_MM_SET_EXCEPTION_STATE (x86-64 and sse): See _mm_setcsr.
_MM_SET_FLUSH_ZERO_MODE (x86-64 and sse): See _mm_setcsr.
_MM_SET_ROUNDING_MODE (x86-64 and sse): See _mm_setcsr.
_MM_TRANSPOSE4_PS (x86-64 and sse): Transposes the 4x4 matrix formed by 4 rows of __m128 in place.
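As an illustration of _MM_TRANSPOSE4_PS, the sketch below (the wrapper name `transpose4` is hypothetical) transposes a 4x4 matrix through SSE registers on x86-64, which is safe to do unconditionally there because SSE is part of the x86-64 baseline, with a plain scalar transpose elsewhere:

```rust
// Sketch: 4x4 f32 transpose via _MM_TRANSPOSE4_PS, with a portable fallback.

#[allow(unreachable_code)]
fn transpose4(m: &mut [[f32; 4]; 4]) {
    #[cfg(target_arch = "x86_64")]
    unsafe {
        use std::arch::x86_64::*;
        // Load the four rows, transpose in registers, store them back.
        let mut r0 = _mm_loadu_ps(m[0].as_ptr());
        let mut r1 = _mm_loadu_ps(m[1].as_ptr());
        let mut r2 = _mm_loadu_ps(m[2].as_ptr());
        let mut r3 = _mm_loadu_ps(m[3].as_ptr());
        _MM_TRANSPOSE4_PS(&mut r0, &mut r1, &mut r2, &mut r3);
        _mm_storeu_ps(m[0].as_mut_ptr(), r0);
        _mm_storeu_ps(m[1].as_mut_ptr(), r1);
        _mm_storeu_ps(m[2].as_mut_ptr(), r2);
        _mm_storeu_ps(m[3].as_mut_ptr(), r3);
        return;
    }
    // Portable fallback: swap across the diagonal.
    for i in 0..4 {
        for j in (i + 1)..4 {
            let t = m[i][j];
            m[i][j] = m[j][i];
            m[j][i] = t;
        }
    }
}

fn main() {
    let mut m = [[1.0f32, 2.0, 3.0, 4.0],
                 [5.0, 6.0, 7.0, 8.0],
                 [9.0, 10.0, 11.0, 12.0],
                 [13.0, 14.0, 15.0, 16.0]];
    transpose4(&mut m);
    println!("{:?}", m);
}
```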

__cpuid (x86-64): See __cpuid_count.
__cpuid_count (x86-64): Returns the result of the cpuid instruction for a given leaf (EAX) and sub_leaf (ECX).
__get_cpuid_max (x86-64): Returns the highest-supported leaf (EAX) and sub-leaf (ECX) cpuid values.
__rdtscp (x86-64): Reads the current value of the processor's time-stamp counter and the IA32_TSC_AUX MSR.
_addcarry_u32 (x86-64): Adds unsigned 32-bit integers a and b with the unsigned 8-bit carry-in c_in (carry flag), stores the unsigned 32-bit result in out, and returns the carry-out (carry or overflow flag).
_addcarry_u64 (x86-64): Adds unsigned 64-bit integers a and b with the unsigned 8-bit carry-in c_in (carry flag), stores the unsigned 64-bit result in out, and returns the carry-out (carry or overflow flag).
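The carry-out/carry-in chaining is what makes _addcarry_u64 useful for multi-word arithmetic. A sketch (the wrapper name `add128` is illustrative) adding two 128-bit values held as 64-bit limbs:

```rust
// Sketch: 128-bit addition from 64-bit limbs using _addcarry_u64,
// with a portable fallback on other architectures.

#[allow(unreachable_code)]
fn add128(lo_a: u64, hi_a: u64, lo_b: u64, hi_b: u64) -> (u64, u64, u8) {
    #[cfg(target_arch = "x86_64")]
    unsafe {
        use std::arch::x86_64::_addcarry_u64;
        let (mut lo, mut hi) = (0u64, 0u64);
        // The carry out of the low limb feeds the high limb.
        let c = _addcarry_u64(0, lo_a, lo_b, &mut lo);
        let c = _addcarry_u64(c, hi_a, hi_b, &mut hi);
        return (lo, hi, c);
    }
    // Portable fallback with the same semantics.
    let (lo, c1) = lo_a.overflowing_add(lo_b);
    let (hi, c2) = hi_a.overflowing_add(hi_b);
    let (hi, c3) = hi.overflowing_add(c1 as u64);
    (lo, hi, (c2 || c3) as u8)
}

fn main() {
    // u64::MAX + 1 in the low limb carries into the high limb.
    let (lo, hi, c) = add128(u64::MAX, 0, 1, 0);
    println!("lo={lo} hi={hi} carry={c}");
}
```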

_addcarryx_u32 (x86-64 and adx): Adds unsigned 32-bit integers a and b with the unsigned 8-bit carry-in c_in (carry or overflow flag), stores the unsigned 32-bit result in out, and returns the carry-out (carry or overflow flag).
_addcarryx_u64 (x86-64 and adx): Adds unsigned 64-bit integers a and b with the unsigned 8-bit carry-in c_in (carry or overflow flag), stores the unsigned 64-bit result in out, and returns the carry-out (carry or overflow flag).
_andn_u32 (x86-64 and bmi1): Bitwise logical AND of inverted a with b.
_andn_u64 (x86-64 and bmi1): Bitwise logical AND of inverted a with b.
_bextr2_u32 (x86-64 and bmi1): Extracts bits of a specified by control into the least significant bits of the result.
_bextr2_u64 (x86-64 and bmi1): Extracts bits of a specified by control into the least significant bits of the result.
_bextr_u32 (x86-64 and bmi1): Extracts bits in the range [start, start + length) from a into the least significant bits of the result.
_bextr_u64 (x86-64 and bmi1): Extracts bits in the range [start, start + length) from a into the least significant bits of the result.

_blcfill_u32 (x86-64 and tbm): Clears all bits below the least significant zero bit of x.
_blcfill_u64 (x86-64 and tbm): Clears all bits below the least significant zero bit of x.
_blci_u32 (x86-64 and tbm): Sets all bits of x to 1 except for the least significant zero bit.
_blci_u64 (x86-64 and tbm): Sets all bits of x to 1 except for the least significant zero bit.
_blcic_u32 (x86-64 and tbm): Sets the least significant zero bit of x and clears all other bits.
_blcic_u64 (x86-64 and tbm): Sets the least significant zero bit of x and clears all other bits.
_blcmsk_u32 (x86-64 and tbm): Sets the least significant zero bit of x and clears all bits above that bit.
_blcmsk_u64 (x86-64 and tbm): Sets the least significant zero bit of x and clears all bits above that bit.
_blcs_u32 (x86-64 and tbm): Sets the least significant zero bit of x.
_blcs_u64 (x86-64 and tbm): Sets the least significant zero bit of x.
_blsfill_u32 (x86-64 and tbm): Sets all bits of x below the least significant one.
_blsfill_u64 (x86-64 and tbm): Sets all bits of x below the least significant one.
_blsi_u32 (x86-64 and bmi1): Extracts the lowest set isolated bit.
_blsi_u64 (x86-64 and bmi1): Extracts the lowest set isolated bit.
_blsic_u32 (x86-64 and tbm): Clears the least significant set bit and sets all other bits.
_blsic_u64 (x86-64 and tbm): Clears the least significant set bit and sets all other bits.
_blsmsk_u32 (x86-64 and bmi1): Gets the mask up to the lowest set bit.
_blsmsk_u64 (x86-64 and bmi1): Gets the mask up to the lowest set bit.
_blsr_u32 (x86-64 and bmi1): Resets the lowest set bit of x.
_blsr_u64 (x86-64 and bmi1): Resets the lowest set bit of x.

_bswap (x86-64): Returns an integer with the reversed byte order of x.
_bswap64 (x86-64): Returns an integer with the reversed byte order of x.
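_bswap64 needs no feature detection on x86-64; only the `cfg` guard matters for portability. A sketch (the wrapper name `byte_swap64` is illustrative; note the intrinsic operates on i64, so the casts just reinterpret the bit pattern):

```rust
// Sketch: byte-order reversal via the bswap instruction, with
// u64::swap_bytes as the portable equivalent.

#[allow(unreachable_code)]
fn byte_swap64(x: u64) -> u64 {
    #[cfg(target_arch = "x86_64")]
    return unsafe { std::arch::x86_64::_bswap64(x as i64) as u64 };
    // Portable equivalent of the bswap instruction.
    x.swap_bytes()
}

fn main() {
    println!("{:#018x}", byte_swap64(0x0102030405060708));
}
```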

_bzhi_u32 (x86-64 and bmi2): Zeroes all bits of a at positions greater than or equal to index.
_bzhi_u64 (x86-64 and bmi2): Zeroes all bits of a at positions greater than or equal to index.
_fxrstor (x86-64 and fxsr): Restores the XMM, MMX, MXCSR, and x87 FPU registers from the 512-byte-long, 16-byte-aligned memory region mem_addr.
_fxrstor64 (x86-64 and fxsr): Restores the XMM, MMX, MXCSR, and x87 FPU registers from the 512-byte-long, 16-byte-aligned memory region mem_addr.
_fxsave (x86-64 and fxsr): Saves the x87 FPU, MMX technology, XMM, and MXCSR registers to the 512-byte-long, 16-byte-aligned memory region mem_addr.
_fxsave64 (x86-64 and fxsr): Saves the x87 FPU, MMX technology, XMM, and MXCSR registers to the 512-byte-long, 16-byte-aligned memory region mem_addr.
_lzcnt_u32 (x86-64 and lzcnt): Counts the number of leading zero bits.
_lzcnt_u64 (x86-64 and lzcnt): Counts the number of leading zero bits.

_mm256_abs_epi8 (x86-64 and avx2): Computes the absolute values of packed 8-bit integers in a.
_mm256_abs_epi16 (x86-64 and avx2): Computes the absolute values of packed 16-bit integers in a.
_mm256_abs_epi32 (x86-64 and avx2): Computes the absolute values of packed 32-bit integers in a.
_mm256_add_epi8 (x86-64 and avx2): Adds packed 8-bit integers in a and b.
_mm256_add_epi16 (x86-64 and avx2): Adds packed 16-bit integers in a and b.
_mm256_add_epi32 (x86-64 and avx2): Adds packed 32-bit integers in a and b.
_mm256_add_epi64 (x86-64 and avx2): Adds packed 64-bit integers in a and b.

_mm256_add_pd (x86-64 and avx): Adds packed double-precision (64-bit) floating-point elements in a and b.
_mm256_add_ps (x86-64 and avx): Adds packed single-precision (32-bit) floating-point elements in a and b.
_mm256_adds_epi8 (x86-64 and avx2): Adds packed 8-bit integers in a and b using saturation.
_mm256_adds_epi16 (x86-64 and avx2): Adds packed 16-bit integers in a and b using saturation.
_mm256_adds_epu8 (x86-64 and avx2): Adds packed unsigned 8-bit integers in a and b using saturation.
_mm256_adds_epu16 (x86-64 and avx2): Adds packed unsigned 16-bit integers in a and b using saturation.

_mm256_addsub_pd (x86-64 and avx): Alternately adds and subtracts packed double-precision (64-bit) floating-point elements in a to/from packed elements in b.
_mm256_addsub_ps (x86-64 and avx): Alternately adds and subtracts packed single-precision (32-bit) floating-point elements in a to/from packed elements in b.
_mm256_alignr_epi8 (x86-64 and avx2): Concatenates pairs of 16-byte blocks in a and b into a 32-byte temporary result, shifts the result right by n bytes, and returns the low 16 bytes.
_mm256_and_pd (x86-64 and avx): Computes the bitwise AND of packed double-precision (64-bit) floating-point elements in a and b.
_mm256_and_ps (x86-64 and avx): Computes the bitwise AND of packed single-precision (32-bit) floating-point elements in a and b.
_mm256_and_si256 (x86-64 and avx2): Computes the bitwise AND of 256 bits (representing integer data) in a and b.
_mm256_andnot_pd (x86-64 and avx): Computes the bitwise NOT of packed double-precision (64-bit) floating-point elements in a, and then AND with b.
_mm256_andnot_ps (x86-64 and avx): Computes the bitwise NOT of packed single-precision (32-bit) floating-point elements in a, and then AND with b.
_mm256_andnot_si256 (x86-64 and avx2): Computes the bitwise NOT of 256 bits (representing integer data) in a, and then AND with b.
_mm256_avg_epu8 (x86-64 and avx2): Averages packed unsigned 8-bit integers in a and b.
_mm256_avg_epu16 (x86-64 and avx2): Averages packed unsigned 16-bit integers in a and b.
_mm256_blend_epi16 (x86-64 and avx2): Blends packed 16-bit integers from a and b using control mask imm8.
_mm256_blend_epi32 (x86-64 and avx2): Blends packed 32-bit integers from a and b using control mask imm8.
_mm256_blend_pd (x86-64 and avx): Blends packed double-precision (64-bit) floating-point elements from a and b using control mask imm8.
_mm256_blend_ps (x86-64 and avx): Blends packed single-precision (32-bit) floating-point elements from a and b using control mask imm8.
_mm256_blendv_epi8 (x86-64 and avx2): Blends packed 8-bit integers from a and b using mask.
_mm256_blendv_pd (x86-64 and avx): Blends packed double-precision (64-bit) floating-point elements from a and b using c as a mask.
_mm256_blendv_ps (x86-64 and avx): Blends packed single-precision (32-bit) floating-point elements from a and b using c as a mask.
_mm256_broadcast_pd (x86-64 and avx): Broadcasts 128 bits from memory (composed of 2 packed double-precision (64-bit) floating-point elements) to all elements of the returned vector.
_mm256_broadcast_ps (x86-64 and avx): Broadcasts 128 bits from memory (composed of 4 packed single-precision (32-bit) floating-point elements) to all elements of the returned vector.
_mm256_broadcast_sd (x86-64 and avx): Broadcasts a double-precision (64-bit) floating-point element from memory to all elements of the returned vector.
_mm256_broadcast_ss (x86-64 and avx): Broadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.
_mm256_broadcastb_epi8 (x86-64 and avx2): Broadcasts the low packed 8-bit integer from a to all elements of the 256-bit returned value.
_mm256_broadcastd_epi32 (x86-64 and avx2): Broadcasts the low packed 32-bit integer from a to all elements of the 256-bit returned value.
_mm256_broadcastq_epi64 (x86-64 and avx2): Broadcasts the low packed 64-bit integer from a to all elements of the 256-bit returned value.
_mm256_broadcastsd_pd (x86-64 and avx2): Broadcasts the low double-precision (64-bit) floating-point element from a to all elements of the 256-bit returned value.
_mm256_broadcastsi128_si256 (x86-64 and avx2): Broadcasts 128 bits of integer data from a to all 128-bit lanes in the 256-bit returned value.
_mm256_broadcastss_ps (x86-64 and avx2): Broadcasts the low single-precision (32-bit) floating-point element from a to all elements of the 256-bit returned value.
_mm256_broadcastw_epi16 (x86-64 and avx2): Broadcasts the low packed 16-bit integer from a to all elements of the 256-bit returned value.
_mm256_bslli_epi128 (x86-64 and avx2): Shifts 128-bit lanes in a left by imm8 bytes while shifting in zeros.
_mm256_bsrli_epi128 (x86-64 and avx2): Shifts 128-bit lanes in a right by imm8 bytes while shifting in zeros.
_mm256_castpd128_pd256 (x86-64 and avx): Casts vector of type __m128d to type __m256d; the upper 128 bits of the result are undefined.
_mm256_castpd256_pd128 (x86-64 and avx): Casts vector of type __m256d to type __m128d.
_mm256_castpd_ps (x86-64 and avx): Casts vector of type __m256d to type __m256.
_mm256_castpd_si256 (x86-64 and avx): Casts vector of type __m256d to type __m256i.
_mm256_castps128_ps256 (x86-64 and avx): Casts vector of type __m128 to type __m256; the upper 128 bits of the result are undefined.
_mm256_castps256_ps128 (x86-64 and avx): Casts vector of type __m256 to type __m128.
_mm256_castps_pd (x86-64 and avx): Casts vector of type __m256 to type __m256d.
_mm256_castps_si256 (x86-64 and avx): Casts vector of type __m256 to type __m256i.
_mm256_castsi128_si256 (x86-64 and avx): Casts vector of type __m128i to type __m256i; the upper 128 bits of the result are undefined.
_mm256_castsi256_pd (x86-64 and avx): Casts vector of type __m256i to type __m256d.
_mm256_castsi256_ps (x86-64 and avx): Casts vector of type __m256i to type __m256.
_mm256_castsi256_si128 (x86-64 and avx): Casts vector of type __m256i to type __m128i.
_mm256_ceil_pd (x86-64 and avx): Rounds packed double-precision (64-bit) floating-point elements in a toward positive infinity.
_mm256_ceil_ps (x86-64 and avx): Rounds packed single-precision (32-bit) floating-point elements in a toward positive infinity.
_mm256_cmp_pd (x86-64 and avx): Compares packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8.
_mm256_cmp_ps (x86-64 and avx): Compares packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8.
_mm256_cmpeq_epi8 (x86-64 and avx2): Compares packed 8-bit integers in a and b for equality.
_mm256_cmpeq_epi16 (x86-64 and avx2): Compares packed 16-bit integers in a and b for equality.
_mm256_cmpeq_epi32 (x86-64 and avx2): Compares packed 32-bit integers in a and b for equality.
_mm256_cmpeq_epi64 (x86-64 and avx2): Compares packed 64-bit integers in a and b for equality.
_mm256_cmpgt_epi8 (x86-64 and avx2): Compares packed 8-bit integers in a and b for greater-than.
_mm256_cmpgt_epi16 (x86-64 and avx2): Compares packed 16-bit integers in a and b for greater-than.
_mm256_cmpgt_epi32 (x86-64 and avx2): Compares packed 32-bit integers in a and b for greater-than.
_mm256_cmpgt_epi64 (x86-64 and avx2): Compares packed 64-bit integers in a and b for greater-than.

_mm256_cvtepi8_epi16x86-64 and avx2

Sign-extend 8-bit integers to 16-bit integers.

_mm256_cvtepi8_epi32x86-64 and avx2

Sign-extend 8-bit integers to 32-bit integers.

_mm256_cvtepi8_epi64x86-64 and avx2

Sign-extend 8-bit integers to 64-bit integers.

_mm256_cvtepi16_epi32x86-64 and avx2

Sign-extend 16-bit integers to 32-bit integers.

_mm256_cvtepi16_epi64x86-64 and avx2

Sign-extend 16-bit integers to 64-bit integers.

_mm256_cvtepi32_epi64x86-64 and avx2

Sign-extend 32-bit integers to 64-bit integers.

_mm256_cvtepi32_pdx86-64 and avx

Converts packed 32-bit integers in a to packed double-precision (64-bit) floating-point elements.

_mm256_cvtepi32_psx86-64 and avx

Converts packed 32-bit integers in a to packed single-precision (32-bit) floating-point elements.

_mm256_cvtepu8_epi16x86-64 and avx2

Zero-extend unsigned 8-bit integers in a to 16-bit integers.

_mm256_cvtepu8_epi32x86-64 and avx2

Zero-extend the lower eight unsigned 8-bit integers in a to 32-bit integers. The upper eight elements of a are unused.

_mm256_cvtepu8_epi64x86-64 and avx2

Zero-extend the lower four unsigned 8-bit integers in a to 64-bit integers. The upper twelve elements of a are unused.

_mm256_cvtepu16_epi32x86-64 and avx2

Zeroes extend packed unsigned 16-bit integers in a to packed 32-bit integers, and stores the results in dst.

_mm256_cvtepu16_epi64x86-64 and avx2

Zero-extend the lower four unsigned 16-bit integers in a to 64-bit integers. The upper four elements of a are unused.

_mm256_cvtepu32_epi64x86-64 and avx2

Zero-extend unsigned 32-bit integers in a to 64-bit integers.

_mm256_cvtpd_epi32x86-64 and avx

Converts packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers.

_mm256_cvtpd_psx86-64 and avx

Converts packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements.

_mm256_cvtps_epi32x86-64 and avx

Converts packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm256_cvtps_pdx86-64 and avx

Converts packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements.

_mm256_cvtsd_f64x86-64 and avx2

Returns the first element of the input vector of [4 x double].

_mm256_cvtsi256_si32x86-64 and avx2

Returns the first element of the input vector of [8 x i32].

_mm256_cvtss_f32x86-64 and avx

Returns the first element of the input vector of [8 x float].

_mm256_cvttpd_epi32x86-64 and avx

Converts packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm256_cvttps_epi32x86-64 and avx

Converts packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm256_div_pdx86-64 and avx

Computes the division of each of the 4 packed 64-bit floating-point elements in a by the corresponding packed elements in b.

_mm256_div_psx86-64 and avx

Computes the division of each of the 8 packed 32-bit floating-point elements in a by the corresponding packed elements in b.

_mm256_dp_psx86-64 and avx

Conditionally multiplies the packed single-precision (32-bit) floating-point elements in a and b using the high 4 bits in imm8, sum the four products, and conditionally return the sum using the low 4 bits of imm8.

_mm256_extract_epi8x86-64 and avx2

Extracts an 8-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm256_extract_epi16x86-64 and avx2

Extracts a 16-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm256_extract_epi32x86-64 and avx2

Extracts a 32-bit integer from a, selected with imm8.

_mm256_extract_epi64x86-64 and avx2

Extracts a 64-bit integer from a, selected with imm8.

_mm256_extractf128_pdx86-64 and avx

Extracts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a, selected with imm8.

_mm256_extractf128_psx86-64 and avx

Extracts 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8.

_mm256_extractf128_si256x86-64 and avx

Extracts 128 bits (composed of integer data) from a, selected with imm8.

_mm256_extracti128_si256x86-64 and avx2

Extracts 128 bits (of integer data) from a selected with imm8.

_mm256_floor_pdx86-64 and avx

Rounds packed double-precision (64-bit) floating point elements in a toward negative infinity.

_mm256_floor_psx86-64 and avx

Rounds packed single-precision (32-bit) floating point elements in a toward negative infinity.

_mm256_fmadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and adds the intermediate result to packed elements in c.

_mm256_fmadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and adds the intermediate result to packed elements in c.

_mm256_fmaddsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and alternately adds and subtracts packed elements in c to/from the intermediate result.

_mm256_fmaddsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and alternately adds and subtracts packed elements in c to/from the intermediate result.

_mm256_fmsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and subtracts packed elements in c from the intermediate result.

_mm256_fmsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and subtracts packed elements in c from the intermediate result.

_mm256_fmsubadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and alternately subtracts and adds packed elements in c from/to the intermediate result.

_mm256_fmsubadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and alternately subtracts and adds packed elements in c from/to the intermediate result.

_mm256_fnmadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and adds the negated intermediate result to packed elements in c.

_mm256_fnmadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and adds the negated intermediate result to packed elements in c.

_mm256_fnmsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and subtracts packed elements in c from the negated intermediate result.

_mm256_fnmsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and subtracts packed elements in c from the negated intermediate result.
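The add/sub alternation in the fmaddsub entries above can be sketched as follows. This is an illustration assuming the fma and avx features are detectable at runtime; the `fused_alt` wrapper is hypothetical and falls back to scalar arithmetic (the values below are exactly representable, so both paths agree despite FMA fusing the rounding).

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// `_mm256_fmaddsub_ps` semantics: even-indexed elements get a*b - c,
/// odd-indexed elements get a*b + c.
pub fn fused_alt(a: [f32; 8], b: [f32; 8], c: [f32; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("fma") && is_x86_feature_detected!("avx") {
            unsafe {
                let r = _mm256_fmaddsub_ps(
                    _mm256_loadu_ps(a.as_ptr()),
                    _mm256_loadu_ps(b.as_ptr()),
                    _mm256_loadu_ps(c.as_ptr()),
                );
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), r);
                return out;
            }
        }
    }
    // Scalar fallback mirroring the documented alternation.
    let mut out = [0.0f32; 8];
    for i in 0..8 {
        out[i] = if i % 2 == 0 { a[i] * b[i] - c[i] } else { a[i] * b[i] + c[i] };
    }
    out
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
    let r = fused_alt(a, [2.0; 8], [1.0; 8]);
    assert_eq!(r, [1.0, 5.0, 5.0, 9.0, 9.0, 13.0, 13.0, 17.0]);
}
```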

_mm256_hadd_epi16x86-64 and avx2

Horizontally adds adjacent pairs of 16-bit integers in a and b.

_mm256_hadd_epi32x86-64 and avx2

Horizontally adds adjacent pairs of 32-bit integers in a and b.

_mm256_hadd_pdx86-64 and avx

Horizontal addition of adjacent pairs in the two packed vectors of 4 64-bit floating points a and b. In the result, sums of elements from a are returned in even locations, while sums of elements from b are returned in odd locations.

_mm256_hadd_psx86-64 and avx

Horizontal addition of adjacent pairs in the two packed vectors of 8 32-bit floating points a and b. In the result, sums of elements from a are returned in locations of indices 0, 1, 4, 5; while sums of elements from b are in locations 2, 3, 6, 7.
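The interleaved output layout described above is easy to get wrong, so here is a sketch that makes it concrete. The `hadd_layout` wrapper is illustrative only; its scalar fallback encodes the same documented index placement.

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// `_mm256_hadd_ps` layout: sums of a land at indices 0, 1, 4, 5;
/// sums of b land at indices 2, 3, 6, 7.
pub fn hadd_layout(a: [f32; 8], b: [f32; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                let r = _mm256_hadd_ps(_mm256_loadu_ps(a.as_ptr()),
                                       _mm256_loadu_ps(b.as_ptr()));
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), r);
                return out;
            }
        }
    }
    // Scalar fallback spelling out the per-lane placement.
    [
        a[0] + a[1], a[2] + a[3], b[0] + b[1], b[2] + b[3], // low 128-bit lane
        a[4] + a[5], a[6] + a[7], b[4] + b[5], b[6] + b[7], // high 128-bit lane
    ]
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
    let b = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0];
    assert_eq!(hadd_layout(a, b),
               [3.0, 7.0, 30.0, 70.0, 11.0, 15.0, 110.0, 150.0]);
}
```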

_mm256_hadds_epi16x86-64 and avx2

Horizontally adds adjacent pairs of 16-bit integers in a and b using saturation.

_mm256_hsub_epi16x86-64 and avx2

Horizontally subtracts adjacent pairs of 16-bit integers in a and b.

_mm256_hsub_epi32x86-64 and avx2

Horizontally subtracts adjacent pairs of 32-bit integers in a and b.

_mm256_hsub_pdx86-64 and avx

Horizontal subtraction of adjacent pairs in the two packed vectors of 4 64-bit floating points a and b. In the result, differences of elements from a are returned in even locations, while differences of elements from b are returned in odd locations.

_mm256_hsub_psx86-64 and avx

Horizontal subtraction of adjacent pairs in the two packed vectors of 8 32-bit floating points a and b. In the result, differences of elements from a are returned in locations of indices 0, 1, 4, 5; while differences of elements from b are in locations 2, 3, 6, 7.

_mm256_hsubs_epi16x86-64 and avx2

Horizontally subtracts adjacent pairs of 16-bit integers in a and b using saturation.

_mm256_i32gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i32gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i32gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i32gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i64gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i64gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i64gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.

_mm256_i64gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8.
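The gather entries above can be sketched as follows, using a scale of 4 so the offsets act as element indices into an f32 slice. This assumes a recent stdarch where the scale is a const generic; the `gather_f32` wrapper is illustrative, the caller must keep every index in bounds, and a scalar fallback covers machines without AVX2.

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// Gathers eight f32 values at the given element indices.
/// Caller must ensure every index is in bounds for `data`.
pub fn gather_f32(data: &[f32], idx: [i32; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            unsafe {
                let offsets = _mm256_loadu_si256(idx.as_ptr() as *const __m256i);
                // SCALE = 4: each offset is multiplied by the f32 element size.
                let v = _mm256_i32gather_ps::<4>(data.as_ptr(), offsets);
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), v);
                return out;
            }
        }
    }
    // Scalar fallback: plain indexed loads.
    let mut out = [0.0f32; 8];
    for i in 0..8 {
        out[i] = data[idx[i] as usize];
    }
    out
}

fn main() {
    let data: Vec<f32> = (0..16).map(|i| i as f32).collect();
    assert_eq!(gather_f32(&data, [0, 2, 4, 6, 1, 3, 5, 7]),
               [0.0, 2.0, 4.0, 6.0, 1.0, 3.0, 5.0, 7.0]);
}
```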

_mm256_insert_epi8x86-64 and avx

Copies a to result, and inserts the 8-bit integer i into result at the location specified by index.

_mm256_insert_epi16x86-64 and avx

Copies a to result, and inserts the 16-bit integer i into result at the location specified by index.

_mm256_insert_epi32x86-64 and avx

Copies a to result, and inserts the 32-bit integer i into result at the location specified by index.

_mm256_insert_epi64x86-64 and avx

Copies a to result, and inserts the 64-bit integer i into result at the location specified by index.

_mm256_insertf128_pdx86-64 and avx

Copies a to result, then inserts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from b into result at the location specified by imm8.

_mm256_insertf128_psx86-64 and avx

Copies a to result, then inserts 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into result at the location specified by imm8.

_mm256_insertf128_si256x86-64 and avx

Copies a to result, then inserts 128 bits from b into result at the location specified by imm8.

_mm256_inserti128_si256x86-64 and avx2

Copies a to result, then inserts 128 bits (of integer data) from b at the location specified by imm8.

_mm256_lddqu_si256x86-64 and avx

Loads 256 bits of integer data from unaligned memory into result. This intrinsic may perform better than _mm256_loadu_si256 when the data crosses a cache line boundary.

_mm256_load_pdx86-64 and avx

Loads 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_load_psx86-64 and avx

Loads 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_load_si256x86-64 and avx

Loads 256 bits of integer data from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
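One way to satisfy the 32-byte alignment requirement of the aligned loads above is a `#[repr(align(32))]` wrapper type, as sketched below. The `Aligned32` type and `sum8_aligned` function are illustrative, not part of the API; when AVX is unavailable the fallback sums the slice directly.

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// Guarantees the 32-byte boundary that `_mm256_load_ps` requires.
#[repr(align(32))]
pub struct Aligned32(pub [f32; 8]);

pub fn sum8_aligned(x: &Aligned32) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                // Sound: repr(align(32)) makes the aligned load legal here.
                let v = _mm256_load_ps(x.0.as_ptr());
                // Simple horizontal sum via a store (clear, not the fastest).
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), v);
                return out.iter().sum();
            }
        }
    }
    x.0.iter().sum()
}

fn main() {
    let v = Aligned32([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]);
    assert_eq!(sum8_aligned(&v), 36.0);
}
```

For data whose alignment is unknown, `_mm256_loadu_ps` avoids the fault at a possible cost in throughput.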

_mm256_loadu2_m128x86-64 and avx,sse

Loads two 128-bit values (composed of 4 packed single-precision (32-bit) floating-point elements) from memory, and combines them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu2_m128dx86-64 and avx,sse2

Loads two 128-bit values (composed of 2 packed double-precision (64-bit) floating-point elements) from memory, and combines them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu2_m128ix86-64 and avx,sse2

Loads two 128-bit values (composed of integer data) from memory, and combines them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu_pdx86-64 and avx

Loads 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_loadu_psx86-64 and avx

Loads 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_loadu_si256x86-64 and avx

Loads 256 bits of integer data from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_madd_epi16x86-64 and avx2

Multiplies packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally adds adjacent pairs of intermediate 32-bit integers.

_mm256_maddubs_epi16x86-64 and avx2

Vertically multiplies each unsigned 8-bit integer from a with the corresponding signed 8-bit integer from b, producing intermediate signed 16-bit integers. Horizontally adds adjacent pairs of intermediate signed 16-bit integers.

_mm256_mask_i32gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i32gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i32gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i32gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i64gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i64gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i64gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_mask_i64gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale should be 1, 2, 4, or 8. If mask is set, loads the value from src in that position instead.

_mm256_maskload_epi32x86-64 and avx2

Loads packed 32-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm256_maskload_epi64x86-64 and avx2

Loads packed 64-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm256_maskload_pdx86-64 and avx

Loads packed double-precision (64-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm256_maskload_psx86-64 and avx

Loads packed single-precision (32-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm256_maskstore_epi32x86-64 and avx2

Stores packed 32-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm256_maskstore_epi64x86-64 and avx2

Stores packed 64-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm256_maskstore_pdx86-64 and avx

Stores packed double-precision (64-bit) floating-point elements from a into memory using mask.

_mm256_maskstore_psx86-64 and avx

Stores packed single-precision (32-bit) floating-point elements from a into memory using mask.

_mm256_max_epi8x86-64 and avx2

Compares packed 8-bit integers in a and b, and returns the packed maximum values.

_mm256_max_epi16x86-64 and avx2

Compares packed 16-bit integers in a and b, and returns the packed maximum values.

_mm256_max_epi32x86-64 and avx2

Compares packed 32-bit integers in a and b, and returns the packed maximum values.

_mm256_max_epu8x86-64 and avx2

Compares packed unsigned 8-bit integers in a and b, and returns the packed maximum values.

_mm256_max_epu16x86-64 and avx2

Compares packed unsigned 16-bit integers in a and b, and returns the packed maximum values.

_mm256_max_epu32x86-64 and avx2

Compares packed unsigned 32-bit integers in a and b, and returns the packed maximum values.

_mm256_max_pdx86-64 and avx

Compares packed double-precision (64-bit) floating-point elements in a and b, and returns packed maximum values.

_mm256_max_psx86-64 and avx

Compares packed single-precision (32-bit) floating-point elements in a and b, and returns packed maximum values.

_mm256_min_epi8x86-64 and avx2

Compares packed 8-bit integers in a and b, and returns the packed minimum values.

_mm256_min_epi16x86-64 and avx2

Compares packed 16-bit integers in a and b, and returns the packed minimum values.

_mm256_min_epi32x86-64 and avx2

Compares packed 32-bit integers in a and b, and returns the packed minimum values.

_mm256_min_epu8x86-64 and avx2

Compares packed unsigned 8-bit integers in a and b, and returns the packed minimum values.

_mm256_min_epu16x86-64 and avx2

Compares packed unsigned 16-bit integers in a and b, and returns the packed minimum values.

_mm256_min_epu32x86-64 and avx2

Compares packed unsigned 32-bit integers in a and b, and returns the packed minimum values.

_mm256_min_pdx86-64 and avx

Compares packed double-precision (64-bit) floating-point elements in a and b, and returns packed minimum values.

_mm256_min_psx86-64 and avx

Compares packed single-precision (32-bit) floating-point elements in a and b, and returns packed minimum values.

_mm256_movedup_pdx86-64 and avx

Duplicates even-indexed double-precision (64-bit) floating-point elements from a, and returns the results.

_mm256_movehdup_psx86-64 and avx

Duplicates odd-indexed single-precision (32-bit) floating-point elements from a, and returns the results.

_mm256_moveldup_psx86-64 and avx

Duplicates even-indexed single-precision (32-bit) floating-point elements from a, and returns the results.

_mm256_movemask_epi8x86-64 and avx2

Creates a mask from the most significant bit of each 8-bit element in a, and returns the result.

_mm256_movemask_pdx86-64 and avx

Sets each bit of the returned mask based on the most significant bit of the corresponding packed double-precision (64-bit) floating-point element in a.

_mm256_movemask_psx86-64 and avx

Sets each bit of the returned mask based on the most significant bit of the corresponding packed single-precision (32-bit) floating-point element in a.
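The movemask entries above collapse sign bits into a small integer, which is handy for branching on vector comparisons. A minimal sketch (the `sign_mask` wrapper is illustrative; the scalar fallback checks the raw sign bit, which matches the intrinsic even for -0.0):

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// Bit i of the result is the sign bit of element i of `x`.
pub fn sign_mask(x: [f32; 8]) -> i32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                return _mm256_movemask_ps(_mm256_loadu_ps(x.as_ptr()));
            }
        }
    }
    // Scalar fallback: is_sign_negative() reads the raw sign bit.
    let mut m = 0;
    for (i, v) in x.iter().enumerate() {
        if v.is_sign_negative() {
            m |= 1 << i;
        }
    }
    m
}

fn main() {
    let x = [-1.0, 1.0, -2.0, 2.0, -3.0, 3.0, -4.0, 4.0];
    assert_eq!(sign_mask(x), 0b0101_0101);
}
```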

_mm256_mpsadbw_epu8x86-64 and avx2

Computes the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and stores the 16-bit results in dst. Eight SADs are performed for each 128-bit lane using one quadruplet from b and eight quadruplets from a. One quadruplet is selected from b starting at the offset specified in imm8. Eight quadruplets are formed from sequential 8-bit integers selected from a starting at the offset specified in imm8.

_mm256_mul_epi32x86-64 and avx2

Multiplies the low 32-bit integers from each packed 64-bit element in a and b.

_mm256_mul_epu32x86-64 and avx2

Multiplies the low unsigned 32-bit integers from each packed 64-bit element in a and b.

_mm256_mul_pdx86-64 and avx

Multiplies packed double-precision (64-bit) floating-point elements in a and b.

_mm256_mul_psx86-64 and avx

Multiplies packed single-precision (32-bit) floating-point elements in a and b.

_mm256_mulhi_epi16x86-64 and avx2

Multiplies the packed 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.

_mm256_mulhi_epu16x86-64 and avx2

Multiplies the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.

_mm256_mulhrs_epi16x86-64 and avx2

Multiplies packed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncates each intermediate integer to the 18 most significant bits, rounds by adding 1, and returns bits [16:1].

_mm256_mullo_epi16x86-64 and avx2

Multiplies the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and returns the low 16 bits of the intermediate integers.

_mm256_mullo_epi32x86-64 and avx2

Multiplies the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and returns the low 32 bits of the intermediate integers.

_mm256_or_pdx86-64 and avx

Computes the bitwise OR of packed double-precision (64-bit) floating-point elements in a and b.

_mm256_or_psx86-64 and avx

Computes the bitwise OR of packed single-precision (32-bit) floating-point elements in a and b.

_mm256_or_si256x86-64 and avx2

Computes the bitwise OR of 256 bits (representing integer data) in a and b.

_mm256_packs_epi16x86-64 and avx2

Converts packed 16-bit integers from a and b to packed 8-bit integers using signed saturation.

_mm256_packs_epi32x86-64 and avx2

Converts packed 32-bit integers from a and b to packed 16-bit integers using signed saturation.

_mm256_packus_epi16x86-64 and avx2

Converts packed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation.

_mm256_packus_epi32x86-64 and avx2

Converts packed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation.

_mm256_permute2f128_pdx86-64 and avx

Shuffles 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) selected by imm8 from a and b.

_mm256_permute2f128_psx86-64 and avx

Shuffles 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) selected by imm8 from a and b.

_mm256_permute2f128_si256x86-64 and avx

Shuffles 128 bits (composed of integer data) selected by imm8 from a and b.

_mm256_permute2x128_si256x86-64 and avx2

Shuffles 128 bits of integer data selected by imm8 from a and b.

_mm256_permute4x64_epi64x86-64 and avx2

Permutes 64-bit integers from a using control mask imm8.

_mm256_permute4x64_pdx86-64 and avx2

Shuffles 64-bit floating-point elements in a across lanes using the control in imm8.

_mm256_permute_pdx86-64 and avx

Shuffles double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_permute_psx86-64 and avx

Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_permutevar8x32_epi32x86-64 and avx2

Permutes packed 32-bit integers from a according to the content of b.

_mm256_permutevar8x32_psx86-64 and avx2

Shuffles eight 32-bit floating-point elements in a across lanes using the corresponding 32-bit integer index in idx.

_mm256_permutevar_pdx86-64 and avx

Shuffles double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b.

_mm256_permutevar_psx86-64 and avx

Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b.

_mm256_rcp_psx86-64 and avx

Computes the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and returns the results. The maximum relative error for this approximation is less than 1.5*2^-12.

_mm256_round_pdx86-64 and avx

Rounds packed double-precision (64-bit) floating point elements in a according to the flag b. The value of b may be as follows:

_mm256_round_psx86-64 and avx

Rounds packed single-precision (32-bit) floating point elements in a according to the flag b. The value of b may be as follows:

_mm256_rsqrt_psx86-64 and avx

Computes the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and returns the results. The maximum relative error for this approximation is less than 1.5*2^-12.

_mm256_sad_epu8x86-64 and avx2

Computes the absolute differences of packed unsigned 8-bit integers in a and b, then horizontally sums each consecutive 8 differences to produce four unsigned 16-bit integers, and packs these unsigned 16-bit integers in the low 16 bits of the 64-bit elements of the return value.
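A sketch of that per-8-byte-chunk reduction, useful for block-matching or byte-wise distance metrics. The `sad_rows` wrapper is illustrative; its scalar fallback sums absolute byte differences over each 8-byte chunk, matching how each 64-bit element of the result is produced.

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// One sum of 8 absolute byte differences per 64-bit element.
pub fn sad_rows(a: [u8; 32], b: [u8; 32]) -> [u64; 4] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            unsafe {
                let va = _mm256_loadu_si256(a.as_ptr() as *const __m256i);
                let vb = _mm256_loadu_si256(b.as_ptr() as *const __m256i);
                let v = _mm256_sad_epu8(va, vb);
                // Each 64-bit element holds one 16-bit SAD, zero-extended.
                let mut out = [0u64; 4];
                _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, v);
                return out;
            }
        }
    }
    // Scalar fallback over 8-byte chunks.
    let mut out = [0u64; 4];
    for chunk in 0..4 {
        for i in 0..8 {
            let j = chunk * 8 + i;
            out[chunk] += (a[j] as i16 - b[j] as i16).unsigned_abs() as u64;
        }
    }
    out
}

fn main() {
    let mut a = [0u8; 32];
    for i in 0..32 {
        a[i] = i as u8;
    }
    assert_eq!(sad_rows(a, [0u8; 32]), [28, 92, 156, 220]);
}
```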

_mm256_set1_epi8x86-64 and avx

Broadcasts 8-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastb instruction.

_mm256_set1_epi16x86-64 and avx

Broadcasts 16-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastw instruction.

_mm256_set1_epi32x86-64 and avx

Broadcasts 32-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastd instruction.

_mm256_set1_epi64xx86-64 and avx

Broadcasts 64-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastq instruction.

_mm256_set1_pdx86-64 and avx

Broadcasts double-precision (64-bit) floating-point value a to all elements of returned vector.

_mm256_set1_psx86-64 and avx

Broadcasts single-precision (32-bit) floating-point value a to all elements of returned vector.

_mm256_set_epi8x86-64 and avx

Sets packed 8-bit integers in returned vector with the supplied values in reverse order.

_mm256_set_epi16x86-64 and avx

Sets packed 16-bit integers in returned vector with the supplied values.

_mm256_set_epi32x86-64 and avx

Sets packed 32-bit integers in returned vector with the supplied values.

_mm256_set_epi64xx86-64 and avx

Sets packed 64-bit integers in returned vector with the supplied values.

_mm256_set_m128x86-64 and avx

Sets packed __m256 returned vector with the supplied values.

_mm256_set_m128dx86-64 and avx

Sets packed __m256d returned vector with the supplied values.

_mm256_set_m128ix86-64 and avx

Sets packed __m256i returned vector with the supplied values.

_mm256_set_pdx86-64 and avx

Sets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values.

_mm256_set_psx86-64 and avx

Sets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values.

_mm256_setr_epi8x86-64 and avx

Sets packed 8-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi16x86-64 and avx

Sets packed 16-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi32x86-64 and avx

Sets packed 32-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi64xx86-64 and avx

Sets packed 64-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_m128x86-64 and avx

Sets packed __m256 returned vector with the supplied values.

_mm256_setr_m128dx86-64 and avx

Sets packed __m256d returned vector with the supplied values.

_mm256_setr_m128ix86-64 and avx

Sets packed __m256i returned vector with the supplied values.

_mm256_setr_pdx86-64 and avx

Sets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values in reverse order.

_mm256_setr_psx86-64 and avx

Sets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values in reverse order.
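The difference between the set and setr families above is purely argument order: set takes the highest-indexed element first, setr the lowest. A small sketch that shows both produce the same in-memory layout when fed mirrored argument lists (`set_order` is an illustrative wrapper with a trivial fallback):

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// Returns the in-memory layout produced by set_ps and setr_ps.
pub fn set_order() -> ([f32; 8], [f32; 8]) {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                // set_ps: arguments run from element 7 down to element 0.
                let s = _mm256_set_ps(7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0);
                // setr_ps ("reverse"): arguments run from element 0 up to 7.
                let r = _mm256_setr_ps(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0);
                let (mut so, mut ro) = ([0.0f32; 8], [0.0f32; 8]);
                _mm256_storeu_ps(so.as_mut_ptr(), s);
                _mm256_storeu_ps(ro.as_mut_ptr(), r);
                return (so, ro);
            }
        }
    }
    // Fallback mirrors the documented layout: element i holds the value i.
    let v = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0];
    (v, v)
}

fn main() {
    let expect = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0];
    assert_eq!(set_order(), (expect, expect));
}
```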

_mm256_setzero_pdx86-64 and avx

Returns vector of type __m256d with all elements set to zero.

_mm256_setzero_psx86-64 and avx

Returns vector of type __m256 with all elements set to zero.

_mm256_setzero_si256x86-64 and avx

Returns vector of type __m256i with all elements set to zero.

_mm256_shuffle_epi8x86-64 and avx2

Shuffles bytes from a according to the content of b.

_mm256_shuffle_epi32x86-64 and avx2

Shuffles 32-bit integers in 128-bit lanes of a using the control in imm8.

_mm256_shuffle_pdx86-64 and avx

Shuffles double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8.

_mm256_shuffle_psx86-64 and avx

Shuffles single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_shufflehi_epi16x86-64 and avx2

Shuffles 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. The low 64 bits of 128-bit lanes of a are copied to the output.

_mm256_shufflelo_epi16x86-64 and avx2

Shuffles 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. The high 64 bits of 128-bit lanes of a are copied to the output.

_mm256_sign_epi8x86-64 and avx2

Negates packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sign_epi16x86-64 and avx2

Negates packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sign_epi32x86-64 and avx2

Negates packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and returns the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sll_epi16x86-64 and avx2

Shifts packed 16-bit integers in a left by count while shifting in zeros, and returns the result.

_mm256_sll_epi32x86-64 and avx2

Shifts packed 32-bit integers in a left by count while shifting in zeros, and returns the result.

_mm256_sll_epi64x86-64 and avx2

Shifts packed 64-bit integers in a left by count while shifting in zeros, and returns the result.

_mm256_slli_epi16x86-64 and avx2

Shifts packed 16-bit integers in a left by imm8 while shifting in zeros, and returns the results.

_mm256_slli_epi32x86-64 and avx2

Shifts packed 32-bit integers in a left by imm8 while shifting in zeros, and returns the results.

_mm256_slli_epi64x86-64 and avx2

Shifts packed 64-bit integers in a left by imm8 while shifting in zeros, and returns the results.

_mm256_slli_si256x86-64 and avx2

Shifts 128-bit lanes in a left by imm8 bytes while shifting in zeros.

_mm256_sllv_epi32x86-64 and avx2

Shifts packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm256_sllv_epi64x86-64 and avx2

Shifts packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm256_sqrt_pdx86-64 and avx

Returns the square root of packed double-precision (64-bit) floating point elements in a.

_mm256_sqrt_psx86-64 and avx

Returns the square root of packed single-precision (32-bit) floating point elements in a.

_mm256_sra_epi16x86-64 and avx2

Shifts packed 16-bit integers in a right by count while shifting in sign bits.

_mm256_sra_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by count while shifting in sign bits.

_mm256_srai_epi16x86-64 and avx2

Shifts packed 16-bit integers in a right by imm8 while shifting in sign bits.

_mm256_srai_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by imm8 while shifting in sign bits.

_mm256_srav_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits.

_mm256_srl_epi16x86-64 and avx2

Shifts packed 16-bit integers in a right by count while shifting in zeros.

_mm256_srl_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by count while shifting in zeros.

_mm256_srl_epi64x86-64 and avx2

Shifts packed 64-bit integers in a right by count while shifting in zeros.

_mm256_srli_epi16x86-64 and avx2

Shifts packed 16-bit integers in a right by imm8 while shifting in zeros.

_mm256_srli_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by imm8 while shifting in zeros.

_mm256_srli_epi64x86-64 and avx2

Shifts packed 64-bit integers in a right by imm8 while shifting in zeros.

_mm256_srli_si256x86-64 and avx2

Shifts 128-bit lanes in a right by imm8 bytes while shifting in zeros.

_mm256_srlv_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.

_mm256_srlv_epi64x86-64 and avx2

Shifts packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.

_mm256_store_pdx86-64 and avx

Stores 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_store_psx86-64 and avx

Stores 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_store_si256x86-64 and avx

Stores 256 bits of integer data from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_storeu2_m128x86-64 and avx,sse

Stores the high and low 128-bit halves (each composed of 4 packed single-precision (32-bit) floating-point elements) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu2_m128dx86-64 and avx,sse2

Stores the high and low 128-bit halves (each composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu2_m128ix86-64 and avx,sse2

Stores the high and low 128-bit halves (each composed of integer data) from a into memory at two different 128-bit locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu_pdx86-64 and avx

Stores 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm256_storeu_psx86-64 and avx

Stores 256 bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm256_storeu_si256x86-64 and avx

Stores 256 bits of integer data from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm256_stream_pdx86-64 and avx

Moves double-precision values from a 256-bit vector of [4 x double] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm256_stream_psx86-64 and avx

Moves single-precision floating point values from a 256-bit vector of [8 x float] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm256_stream_si256x86-64 and avx

Moves integer data from a 256-bit integer vector to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm256_sub_epi8x86-64 and avx2

Subtracts packed 8-bit integers in b from packed 8-bit integers in a.

_mm256_sub_epi16x86-64 and avx2

Subtracts packed 16-bit integers in b from packed 16-bit integers in a.

_mm256_sub_epi32x86-64 and avx2

Subtracts packed 32-bit integers in b from packed 32-bit integers in a.

_mm256_sub_epi64x86-64 and avx2

Subtracts packed 64-bit integers in b from packed 64-bit integers in a.

_mm256_sub_pdx86-64 and avx

Subtracts packed double-precision (64-bit) floating-point elements in b from packed elements in a.

_mm256_sub_psx86-64 and avx

Subtracts packed single-precision (32-bit) floating-point elements in b from packed elements in a.

_mm256_subs_epi8x86-64 and avx2

Subtracts packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_mm256_subs_epi16x86-64 and avx2

Subtracts packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_mm256_subs_epu8x86-64 and avx2

Subtracts packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.

_mm256_subs_epu16x86-64 and avx2

Subtracts packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.

_mm256_testc_pdx86-64 and avx

Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns the CF value.

_mm256_testc_psx86-64 and avx

Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns the CF value.

_mm256_testc_si256x86-64 and avx

Computes the bitwise AND of 256 bits (representing integer data) in a and b, and sets ZF to 1 if the result is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, and sets CF to 1 if the result is zero, otherwise sets CF to 0. Returns the CF value.

_mm256_testnzc_pdx86-64 and avx

Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns 1 if both the ZF and CF values are zero, otherwise returns 0.

_mm256_testnzc_psx86-64 and avx

Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns 1 if both the ZF and CF values are zero, otherwise returns 0.

_mm256_testnzc_si256x86-64 and avx

Computes the bitwise AND of 256 bits (representing integer data) in a and b, and sets ZF to 1 if the result is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, and sets CF to 1 if the result is zero, otherwise sets CF to 0. Returns 1 if both the ZF and CF values are zero, otherwise returns 0.

_mm256_testz_pdx86-64 and avx

Computes the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns the ZF value.

_mm256_testz_psx86-64 and avx

Computes the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and sets ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, producing an intermediate value, and sets CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise sets CF to 0. Returns the ZF value.

_mm256_testz_si256x86-64 and avx

Computes the bitwise AND of 256 bits (representing integer data) in a and b, and sets ZF to 1 if the result is zero, otherwise sets ZF to 0. Computes the bitwise NOT of a and then ANDs with b, and sets CF to 1 if the result is zero, otherwise sets CF to 0. Returns the ZF value.

_mm256_undefined_pdx86-64 and avx

Returns vector of type __m256d with undefined elements.

_mm256_undefined_psx86-64 and avx

Returns vector of type __m256 with undefined elements.

_mm256_undefined_si256x86-64 and avx

Returns vector of type __m256i with undefined elements.

_mm256_unpackhi_epi8x86-64 and avx2

Unpacks and interleaves 8-bit integers from the high half of each 128-bit lane in a and b.

_mm256_unpackhi_epi16x86-64 and avx2

Unpacks and interleaves 16-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpackhi_epi32x86-64 and avx2

Unpacks and interleaves 32-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpackhi_epi64x86-64 and avx2

Unpacks and interleaves 64-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpackhi_pdx86-64 and avx

Unpacks and interleaves double-precision (64-bit) floating-point elements from the high half of each 128-bit lane in a and b.

_mm256_unpackhi_psx86-64 and avx

Unpacks and interleaves single-precision (32-bit) floating-point elements from the high half of each 128-bit lane in a and b.

_mm256_unpacklo_epi8x86-64 and avx2

Unpacks and interleaves 8-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi16x86-64 and avx2

Unpacks and interleaves 16-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi32x86-64 and avx2

Unpacks and interleaves 32-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi64x86-64 and avx2

Unpacks and interleaves 64-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_pdx86-64 and avx

Unpacks and interleaves double-precision (64-bit) floating-point elements from the low half of each 128-bit lane in a and b.

_mm256_unpacklo_psx86-64 and avx

Unpacks and interleaves single-precision (32-bit) floating-point elements from the low half of each 128-bit lane in a and b.

_mm256_xor_pdx86-64 and avx

Computes the bitwise XOR of packed double-precision (64-bit) floating-point elements in a and b.

_mm256_xor_psx86-64 and avx

Computes the bitwise XOR of packed single-precision (32-bit) floating-point elements in a and b.

_mm256_xor_si256x86-64 and avx2

Computes the bitwise XOR of 256 bits (representing integer data) in a and b.

_mm256_zeroallx86-64 and avx

Zeroes the contents of all XMM and YMM registers.

_mm256_zeroupperx86-64 and avx

Zeroes the upper 128 bits of all YMM registers; the lower 128 bits of the registers are unmodified.

_mm256_zextpd128_pd256x86-64 and avx,sse2

Constructs a 256-bit floating-point vector of [4 x double] from a 128-bit floating-point vector of [2 x double]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm256_zextps128_ps256x86-64 and avx,sse

Constructs a 256-bit floating-point vector of [8 x float] from a 128-bit floating-point vector of [4 x float]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm256_zextsi128_si256x86-64 and avx,sse2

Constructs a 256-bit integer vector from a 128-bit integer vector. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm512_storeu_psx86-64 and avx512f

Stores 512-bits (composed of 16 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm_abs_epi8x86-64 and ssse3

Computes the absolute value of packed 8-bit signed integers in a and returns the unsigned results.

_mm_abs_epi16x86-64 and ssse3

Computes the absolute value of each of the packed 16-bit signed integers in a and returns the 16-bit unsigned results.

_mm_abs_epi32x86-64 and ssse3

Computes the absolute value of each of the packed 32-bit signed integers in a and returns the 32-bit unsigned results.

_mm_add_epi8x86-64 and sse2

Adds packed 8-bit integers in a and b.

_mm_add_epi16x86-64 and sse2

Adds packed 16-bit integers in a and b.

_mm_add_epi32x86-64 and sse2

Adds packed 32-bit integers in a and b.

_mm_add_epi64x86-64 and sse2

Adds packed 64-bit integers in a and b.

_mm_add_pdx86-64 and sse2

Adds packed double-precision (64-bit) floating-point elements in a and b.

_mm_add_psx86-64 and sse

Adds __m128 vectors.

_mm_add_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the sum of the low elements of a and b.

_mm_add_ssx86-64 and sse

Adds the first component of a and b, the other components are copied from a.

_mm_adds_epi8x86-64 and sse2

Adds packed 8-bit integers in a and b using saturation.

_mm_adds_epi16x86-64 and sse2

Adds packed 16-bit integers in a and b using saturation.

_mm_adds_epu8x86-64 and sse2

Adds packed unsigned 8-bit integers in a and b using saturation.

_mm_adds_epu16x86-64 and sse2

Adds packed unsigned 16-bit integers in a and b using saturation.

_mm_addsub_pdx86-64 and sse3

Alternately adds and subtracts packed double-precision (64-bit) floating-point elements in a to/from packed elements in b.

_mm_addsub_psx86-64 and sse3

Alternately adds and subtracts packed single-precision (32-bit) floating-point elements in a to/from packed elements in b.

_mm_aesdec_si128x86-64 and aes

Performs one round of an AES decryption flow on data (state) in a.

_mm_aesdeclast_si128x86-64 and aes

Performs the last round of an AES decryption flow on data (state) in a.

_mm_aesenc_si128x86-64 and aes

Performs one round of an AES encryption flow on data (state) in a.

_mm_aesenclast_si128x86-64 and aes

Performs the last round of an AES encryption flow on data (state) in a.

_mm_aesimc_si128x86-64 and aes

Performs the InvMixColumns transformation on a.

_mm_aeskeygenassist_si128x86-64 and aes

Assist in expanding the AES cipher key.

_mm_alignr_epi8x86-64 and ssse3

Concatenates 16-byte blocks in a and b into a 32-byte temporary result, shifts the result right by n bytes, and returns the low 16 bytes.

_mm_and_pdx86-64 and sse2

Computes the bitwise AND of packed double-precision (64-bit) floating-point elements in a and b.

_mm_and_psx86-64 and sse

Bitwise AND of packed single-precision (32-bit) floating-point elements.

_mm_and_si128x86-64 and sse2

Computes the bitwise AND of 128 bits (representing integer data) in a and b.

_mm_andnot_pdx86-64 and sse2

Computes the bitwise NOT of a and then AND with b.

_mm_andnot_psx86-64 and sse

Bitwise AND-NOT of packed single-precision (32-bit) floating-point elements.

_mm_andnot_si128x86-64 and sse2

Computes the bitwise NOT of 128 bits (representing integer data) in a and then AND with b.

_mm_avg_epu8x86-64 and sse2

Averages packed unsigned 8-bit integers in a and b.

_mm_avg_epu16x86-64 and sse2

Averages packed unsigned 16-bit integers in a and b.

_mm_blend_epi16x86-64 and sse4.1

Blends packed 16-bit integers from a and b using the mask imm8.

_mm_blend_epi32x86-64 and avx2

Blends packed 32-bit integers from a and b using control mask imm8.

_mm_blend_pdx86-64 and sse4.1

Blends packed double-precision (64-bit) floating-point elements from a and b using control mask imm2.

_mm_blend_psx86-64 and sse4.1

Blends packed single-precision (32-bit) floating-point elements from a and b using mask imm4.

_mm_blendv_epi8x86-64 and sse4.1

Blends packed 8-bit integers from a and b using mask.

_mm_blendv_pdx86-64 and sse4.1

Blends packed double-precision (64-bit) floating-point elements from a and b using mask.

_mm_blendv_psx86-64 and sse4.1

Blends packed single-precision (32-bit) floating-point elements from a and b using mask.

_mm_broadcast_ssx86-64 and avx

Broadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.

_mm_broadcastb_epi8x86-64 and avx2

Broadcasts the low packed 8-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastd_epi32x86-64 and avx2

Broadcasts the low packed 32-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastq_epi64x86-64 and avx2

Broadcasts the low packed 64-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastsd_pdx86-64 and avx2

Broadcasts the low double-precision (64-bit) floating-point element from a to all elements of the 128-bit returned value.

_mm_broadcastss_psx86-64 and avx2

Broadcasts the low single-precision (32-bit) floating-point element from a to all elements of the 128-bit returned value.

_mm_broadcastw_epi16x86-64 and avx2

Broadcasts the low packed 16-bit integer from a to all elements of the 128-bit returned value.

_mm_bslli_si128x86-64 and sse2

Shifts a left by imm8 bytes while shifting in zeros.

_mm_bsrli_si128x86-64 and sse2

Shifts a right by imm8 bytes while shifting in zeros.

_mm_castpd_psx86-64 and sse2

Casts a 128-bit floating-point vector of [2 x double] into a 128-bit floating-point vector of [4 x float].

_mm_castpd_si128x86-64 and sse2

Casts a 128-bit floating-point vector of [2 x double] into a 128-bit integer vector.

_mm_castps_pdx86-64 and sse2

Casts a 128-bit floating-point vector of [4 x float] into a 128-bit floating-point vector of [2 x double].

_mm_castps_si128x86-64 and sse2

Casts a 128-bit floating-point vector of [4 x float] into a 128-bit integer vector.

_mm_castsi128_pdx86-64 and sse2

Casts a 128-bit integer vector into a 128-bit floating-point vector of [2 x double].

_mm_castsi128_psx86-64 and sse2

Casts a 128-bit integer vector into a 128-bit floating-point vector of [4 x float].

_mm_ceil_pdx86-64 and sse4.1

Rounds the packed double-precision (64-bit) floating-point elements in a up to an integer value, and stores the results as packed double-precision floating-point elements.

_mm_ceil_psx86-64 and sse4.1

Rounds the packed single-precision (32-bit) floating-point elements in a up to an integer value, and stores the results as packed single-precision floating-point elements.

_mm_ceil_sdx86-64 and sse4.1

Rounds the lower double-precision (64-bit) floating-point element in b up to an integer value, stores the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from a to the upper element of the intrinsic result.

_mm_ceil_ssx86-64 and sse4.1

Rounds the lower single-precision (32-bit) floating-point element in b up to an integer value, stores the result as a single-precision floating-point element in the lower element of the intrinsic result, and copies the upper 3 packed elements from a to the upper elements of the intrinsic result.

_mm_clflushx86-64 and sse2

Invalidates and flushes the cache line that contains p from all levels of the cache hierarchy.

_mm_clmulepi64_si128x86-64 and pclmulqdq

Performs a carry-less multiplication of two 64-bit polynomials over the finite field GF(2^k).

_mm_cmp_pdx86-64 and avx,sse2

Compares packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8.

_mm_cmp_psx86-64 and avx,sse

Compares packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8.

_mm_cmp_sdx86-64 and avx,sse2

Compares the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, stores the result in the lower element of the returned vector, and copies the upper element from a to the upper element of the returned vector.

_mm_cmp_ssx86-64 and avx,sse

Compares the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, stores the result in the lower element of the returned vector, and copies the upper 3 packed elements from a to the upper elements of the returned vector.

_mm_cmpeq_epi8x86-64 and sse2

Compares packed 8-bit integers in a and b for equality.

_mm_cmpeq_epi16x86-64 and sse2

Compares packed 16-bit integers in a and b for equality.

_mm_cmpeq_epi32x86-64 and sse2

Compares packed 32-bit integers in a and b for equality.

_mm_cmpeq_epi64x86-64 and sse4.1

Compares packed 64-bit integers in a and b for equality.

_mm_cmpeq_pdx86-64 and sse2

Compares corresponding elements in a and b for equality.

_mm_cmpeq_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements were equal, or 0 otherwise.

_mm_cmpeq_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the equality comparison of the lower elements of a and b.

_mm_cmpeq_ssx86-64 and sse

Compares the lowest f32 of both inputs for equality. The lowest 32 bits of the result will be 0xffffffff if the two inputs are equal, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpestrax86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.

_mm_cmpestrcx86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns 1 if the resulting mask was non-zero, and 0 otherwise.

_mm_cmpestrix86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns the generated index. Similar to _mm_cmpistri with the exception that _mm_cmpistri implicitly determines the length of a and b.

_mm_cmpestrmx86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns the generated mask.

_mm_cmpestrox86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns bit 0 of the resulting bit mask.

_mm_cmpestrsx86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns 1 if any character in a was null, and 0 otherwise.

_mm_cmpestrzx86-64 and sse4.2

Compares packed strings in a and b with lengths la and lb using the control in imm8, and returns 1 if any character in b was null, and 0 otherwise.

_mm_cmpge_pdx86-64 and sse2

Compares corresponding elements in a and b for greater-than-or-equal.

_mm_cmpge_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpge_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the greater-than-or-equal comparison of the lower elements of a and b.

_mm_cmpge_ssx86-64 and sse

Compares the lowest f32 of both inputs for greater than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpgt_epi8x86-64 and sse2

Compares packed 8-bit integers in a and b for greater-than.

_mm_cmpgt_epi16x86-64 and sse2

Compares packed 16-bit integers in a and b for greater-than.

_mm_cmpgt_epi32x86-64 and sse2

Compares packed 32-bit integers in a and b for greater-than.

_mm_cmpgt_epi64x86-64 and sse4.2

Compares packed 64-bit integers in a and b for greater-than and returns the results.

_mm_cmpgt_pdx86-64 and sse2

Compares corresponding elements in a and b for greater-than.

_mm_cmpgt_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than the corresponding element in b, or 0 otherwise.

_mm_cmpgt_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the greater-than comparison of the lower elements of a and b.

_mm_cmpgt_ssx86-64 and sse

Compares the lowest f32 of both inputs for greater than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpistrax86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.

_mm_cmpistrcx86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns 1 if the resulting mask was non-zero, and 0 otherwise.

_mm_cmpistrix86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns the generated index. Similar to _mm_cmpestri with the exception that _mm_cmpestri requires the lengths of a and b to be explicitly specified.

_mm_cmpistrmx86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns the generated mask.

_mm_cmpistrox86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns bit 0 of the resulting bit mask.

_mm_cmpistrsx86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns 1 if any character in a was null, and 0 otherwise.

_mm_cmpistrzx86-64 and sse4.2

Compares packed strings with implicit lengths in a and b using the control in imm8, and returns 1 if any character in b was null, and 0 otherwise.

_mm_cmple_pdx86-64 and sse2

Compares corresponding elements in a and b for less-than-or-equal.

_mm_cmple_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmple_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the less-than-or-equal comparison of the lower elements of a and b.

_mm_cmple_ssx86-64 and sse

Compares the lowest f32 of both inputs for less than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmplt_epi8x86-64 and sse2

Compares packed 8-bit integers in a and b for less-than.

_mm_cmplt_epi16x86-64 and sse2

Compares packed 16-bit integers in a and b for less-than.

_mm_cmplt_epi32x86-64 and sse2

Compares packed 32-bit integers in a and b for less-than.

_mm_cmplt_pdx86-64 and sse2

Compares corresponding elements in a and b for less-than.

_mm_cmplt_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than the corresponding element in b, or 0 otherwise.

_mm_cmplt_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the less-than comparison of the lower elements of a and b.

_mm_cmplt_ssx86-64 and sse

Compares the lowest f32 of both inputs for less than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpneq_pdx86-64 and sse2

Compares corresponding elements in a and b for not-equal.

_mm_cmpneq_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements are not equal, or 0 otherwise.

_mm_cmpneq_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the not-equal comparison of the lower elements of a and b.

_mm_cmpneq_ssx86-64 and sse

Compares the lowest f32 of both inputs for inequality. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnge_pdx86-64 and sse2

Compares corresponding elements in a and b for not-greater-than-or-equal.

_mm_cmpnge_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpnge_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the not-greater-than-or-equal comparison of the lower elements of a and b.

_mm_cmpnge_ssx86-64 and sse

Compares the lowest f32 of both inputs for not-greater-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpngt_pdx86-64 and sse2

Compares corresponding elements in a and b for not-greater-than.

_mm_cmpngt_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than the corresponding element in b, or 0 otherwise.

_mm_cmpngt_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the not-greater-than comparison of the lower elements of a and b.

_mm_cmpngt_ssx86-64 and sse

Compares the lowest f32 of both inputs for not-greater-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnle_pdx86-64 and sse2

Compares corresponding elements in a and b for not-less-than-or-equal.

_mm_cmpnle_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpnle_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the not-less-than-or-equal comparison of the lower elements of a and b.

_mm_cmpnle_ssx86-64 and sse

Compares the lowest f32 of both inputs for not-less-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnlt_pdx86-64 and sse2

Compares corresponding elements in a and b for not-less-than.

_mm_cmpnlt_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than the corresponding element in b, or 0 otherwise.

_mm_cmpnlt_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the not-less-than comparison of the lower elements of a and b.

_mm_cmpnlt_ssx86-64 and sse

Compares the lowest f32 of both inputs for not-less-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpord_pdx86-64 and sse2

Compares corresponding elements in a and b to see if neither is NaN.

_mm_cmpord_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are ordered (i.e., neither of them is a NaN), or 0 otherwise.

_mm_cmpord_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the result of testing the lower elements of a and b for NaN. If neither is NaN then 0xFFFFFFFFFFFFFFFF is used, and 0 otherwise.

_mm_cmpord_ssx86-64 and sse

Checks if the lowest f32 of both inputs are ordered. The lowest 32 bits of the result will be 0xffffffff if neither of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpunord_pdx86-64 and sse2

Compares corresponding elements in a and b to see if either is NaN.

_mm_cmpunord_psx86-64 and sse

Compares each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are unordered (i.e., at least one of them is a NaN), or 0 otherwise.

_mm_cmpunord_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the result of testing the lower elements of a and b for NaN. If either is NaN then 0xFFFFFFFFFFFFFFFF is used, and 0 otherwise.

_mm_cmpunord_ssx86-64 and sse

Checks if the lowest f32 of both inputs are unordered. The lowest 32 bits of the result will be 0xffffffff if either of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_comieq_sdx86-64 and sse2

Compares the lower element of a and b for equality.

_mm_comieq_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are equal, or 0 otherwise.

_mm_comige_sdx86-64 and sse2

Compares the lower element of a and b for greater-than-or-equal.

_mm_comige_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than or equal to the one from b, or 0 otherwise.

_mm_comigt_sdx86-64 and sse2

Compares the lower element of a and b for greater-than.

_mm_comigt_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than the one from b, or 0 otherwise.

_mm_comile_sdx86-64 and sse2

Compares the lower element of a and b for less-than-or-equal.

_mm_comile_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than or equal to the one from b, or 0 otherwise.

_mm_comilt_sdx86-64 and sse2

Compares the lower element of a and b for less-than.

_mm_comilt_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than the one from b, or 0 otherwise.

_mm_comineq_sdx86-64 and sse2

Compares the lower element of a and b for not-equal.

_mm_comineq_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are not equal, or 0 otherwise.

_mm_crc32_u8x86-64 and sse4.2

Starting with the initial value in crc, returns the accumulated CRC32 value for unsigned 8-bit integer v.

_mm_crc32_u16x86-64 and sse4.2

Starting with the initial value in crc, returns the accumulated CRC32 value for unsigned 16-bit integer v.

_mm_crc32_u32x86-64 and sse4.2

Starting with the initial value in crc, returns the accumulated CRC32 value for unsigned 32-bit integer v.

_mm_crc32_u64x86-64 and sse4.2

Starting with the initial value in crc, return the accumulated CRC32 value for unsigned 64-bit integer v.
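Chaining the `_mm_crc32_*` intrinsics over a buffer accumulates a CRC-32C (Castagnoli) checksum. A sketch under the usual CRC-32C convention of initializing with and finally XORing by all-ones (the helper names are ours; SSE4.2 is not baseline, so the call is gated behind runtime detection):

```rust
use std::arch::x86_64::*;

// Byte-at-a-time CRC-32C using the SSE4.2 crc32 instruction.
#[target_feature(enable = "sse4.2")]
unsafe fn crc32c_hw(data: &[u8]) -> u32 {
    let mut crc = !0u32; // conventional all-ones initial value
    for &b in data {
        crc = _mm_crc32_u8(crc, b);
    }
    crc ^ !0u32 // conventional final XOR
}

// Safe wrapper: None when sse4.2 is unavailable at runtime.
fn crc32c(data: &[u8]) -> Option<u32> {
    if is_x86_feature_detected!("sse4.2") {
        Some(unsafe { crc32c_hw(data) })
    } else {
        None
    }
}

fn main() {
    if let Some(c) = crc32c(b"123456789") {
        // Standard CRC-32C check value for "123456789".
        assert_eq!(c, 0xE306_9283);
    }
    println!("ok");
}
```

The wider `_mm_crc32_u16`/`u32`/`u64` variants consume more input per instruction but accumulate the same checksum when fed the same byte stream.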

_mm_cvt_si2ssx86-64 and sse

Alias for _mm_cvtsi32_ss.

_mm_cvt_ss2six86-64 and sse

Alias for _mm_cvtss_si32.

_mm_cvtepi8_epi16x86-64 and sse4.1

Sign-extends packed 8-bit integers in a to packed 16-bit integers.

_mm_cvtepi8_epi32x86-64 and sse4.1

Sign-extends packed 8-bit integers in a to packed 32-bit integers.

_mm_cvtepi8_epi64x86-64 and sse4.1

Sign-extends the low 2 packed 8-bit integers of a to packed 64-bit integers.

_mm_cvtepi16_epi32x86-64 and sse4.1

Sign-extends packed 16-bit integers in a to packed 32-bit integers.

_mm_cvtepi16_epi64x86-64 and sse4.1

Sign-extends packed 16-bit integers in a to packed 64-bit integers.

_mm_cvtepi32_epi64x86-64 and sse4.1

Sign-extends packed 32-bit integers in a to packed 64-bit integers.

_mm_cvtepi32_pdx86-64 and sse2

Converts the lower two packed 32-bit integers in a to packed double-precision (64-bit) floating-point elements.

_mm_cvtepi32_psx86-64 and sse2

Converts packed 32-bit integers in a to packed single-precision (32-bit) floating-point elements.

_mm_cvtepu8_epi16x86-64 and sse4.1

Zero-extends packed unsigned 8-bit integers in a to packed 16-bit integers.

_mm_cvtepu8_epi32x86-64 and sse4.1

Zero-extends packed unsigned 8-bit integers in a to packed 32-bit integers.

_mm_cvtepu8_epi64x86-64 and sse4.1

Zero-extends packed unsigned 8-bit integers in a to packed 64-bit integers.

_mm_cvtepu16_epi32x86-64 and sse4.1

Zero-extends packed unsigned 16-bit integers in a to packed 32-bit integers.

_mm_cvtepu16_epi64x86-64 and sse4.1

Zero-extends packed unsigned 16-bit integers in a to packed 64-bit integers.

_mm_cvtepu32_epi64x86-64 and sse4.1

Zero-extends packed unsigned 32-bit integers in a to packed 64-bit integers.

_mm_cvtpd_epi32x86-64 and sse2

Converts packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtpd_psx86-64 and sse2

Converts packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements

_mm_cvtps_epi32x86-64 and sse2

Converts packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtps_pdx86-64 and sse2

Converts packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements.

_mm_cvtsd_f64x86-64 and sse2

Returns the lower double-precision (64-bit) floating-point element of a.

_mm_cvtsd_si32x86-64 and sse2

Converts the lower double-precision (64-bit) floating-point element in a to a 32-bit integer.

_mm_cvtsd_si64x86-64 and sse2

Converts the lower double-precision (64-bit) floating-point element in a to a 64-bit integer.

_mm_cvtsd_si64xx86-64 and sse2

Alias for _mm_cvtsd_si64

_mm_cvtsd_ssx86-64 and sse2

Converts the lower double-precision (64-bit) floating-point element in b to a single-precision (32-bit) floating-point element, store the result in the lower element of the return value, and copies the upper element from a to the upper element the return value.

_mm_cvtsi32_sdx86-64 and sse2

Returns a with its lower element replaced by b after converting it to an f64.

_mm_cvtsi32_si128x86-64 and sse2

Returns a vector whose lowest element is a and all higher elements are 0.

_mm_cvtsi32_ssx86-64 and sse

Converts a 32-bit integer to a 32-bit float. The result vector is the input vector a with the lowest 32-bit float replaced by the converted integer.

_mm_cvtsi64_sdx86-64 and sse2

Returns a with its lower element replaced by b after converting it to an f64.

_mm_cvtsi64_si128x86-64 and sse2

Returns a vector whose lowest element is a and all higher elements are 0.

_mm_cvtsi64_ssx86-64 and sse

Converts a 64-bit integer to a 32-bit float. The result vector is the input vector a with the lowest 32-bit float replaced by the converted integer.

_mm_cvtsi64x_sdx86-64 and sse2

Returns a with its lower element replaced by b after converting it to an f64.

_mm_cvtsi64x_si128x86-64 and sse2

Returns a vector whose lowest element is a and all higher elements are 0.

_mm_cvtsi128_si32x86-64 and sse2

Returns the lowest element of a.

_mm_cvtsi128_si64x86-64 and sse2

Returns the lowest element of a.

_mm_cvtsi128_si64xx86-64 and sse2

Returns the lowest element of a.

_mm_cvtss_f32x86-64 and sse

Extracts the lowest 32-bit float from the input vector.

_mm_cvtss_sdx86-64 and sse2

Converts the lower single-precision (32-bit) floating-point element in b to a double-precision (64-bit) floating-point element, store the result in the lower element of the return value, and copies the upper element from a to the upper element the return value.

_mm_cvtss_si32x86-64 and sse

Converts the lowest 32-bit float in the input vector to a 32-bit integer.

_mm_cvtss_si64x86-64 and sse

Converts the lowest 32-bit float in the input vector to a 64-bit integer.

_mm_cvtt_ss2six86-64 and sse

Alias for _mm_cvttss_si32.

_mm_cvttpd_epi32x86-64 and sse2

Converts packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm_cvttps_epi32x86-64 and sse2

Converts packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm_cvttsd_si32x86-64 and sse2

Converts the lower double-precision (64-bit) floating-point element in a to a 32-bit integer with truncation.

_mm_cvttsd_si64x86-64 and sse2

Converts the lower double-precision (64-bit) floating-point element in a to a 64-bit integer with truncation.

_mm_cvttsd_si64xx86-64 and sse2

Alias for _mm_cvttsd_si64

_mm_cvttss_si32x86-64 and sse

Converts the lowest 32-bit float in the input vector to a 32-bit integer with truncation.

_mm_cvttss_si64x86-64 and sse

Converts the lowest 32-bit float in the input vector to a 64-bit integer with truncation.
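The difference between the `_mm_cvt*` and `_mm_cvtt*` conversions above is the rounding step: the former round according to MXCSR (round-to-nearest-even by default), while the latter always truncate toward zero. A minimal sketch (the helper name is ours; both intrinsics are SSE baseline on x86-64):

```rust
use std::arch::x86_64::*;

// Contrast _mm_cvtss_si32 (rounds per MXCSR, default nearest-even)
// with _mm_cvttss_si32 (always truncates toward zero).
fn round_and_truncate(x: f32) -> (i32, i32) {
    unsafe {
        let v = _mm_set_ss(x);
        (_mm_cvtss_si32(v), _mm_cvttss_si32(v))
    }
}

fn main() {
    assert_eq!(round_and_truncate(1.5), (2, 1)); // nearest-even rounds 1.5 up to 2
    assert_eq!(round_and_truncate(2.5), (2, 2)); // nearest-even rounds 2.5 down to 2
    assert_eq!(round_and_truncate(-1.7), (-2, -1)); // truncation goes toward zero
    println!("ok");
}
```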

_mm_div_pdx86-64 and sse2

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b.

_mm_div_psx86-64 and sse

Divides __m128 vectors.

_mm_div_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the result of dividing the lower element of a by the lower element of b.

_mm_div_ssx86-64 and sse

Divides the first component of a by the first component of b; the other components are copied from a.

_mm_dp_pdx86-64 and sse4.1

Returns the dot product of two __m128d vectors.

_mm_dp_psx86-64 and sse4.1

Returns the dot product of two __m128 vectors.

_mm_extract_epi8x86-64 and sse4.1

Extracts an 8-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm_extract_epi16x86-64 and sse2

Returns the imm8 element of a.

_mm_extract_epi32x86-64 and sse4.1

Extracts a 32-bit integer from a, selected with imm8.

_mm_extract_epi64x86-64 and sse4.1

Extracts a 64-bit integer from a, selected with imm8.

_mm_extract_psx86-64 and sse4.1

Extracts a single-precision (32-bit) floating-point element from a, selected with imm8

_mm_extract_si64x86-64 and sse4a

Extracts the bit range specified by y from the lower 64 bits of x.

_mm_floor_pdx86-64 and sse4.1

Round the packed double-precision (64-bit) floating-point elements in a down to an integer value, and stores the results as packed double-precision floating-point elements.

_mm_floor_psx86-64 and sse4.1

Round the packed single-precision (32-bit) floating-point elements in a down to an integer value, and stores the results as packed single-precision floating-point elements.

_mm_floor_sdx86-64 and sse4.1

Round the lower double-precision (64-bit) floating-point element in b down to an integer value, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from a to the upper element of the intrinsic result.

_mm_floor_ssx86-64 and sse4.1

Round the lower single-precision (32-bit) floating-point element in b down to an integer value, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copies the upper 3 packed elements from a to the upper elements of the intrinsic result.

_mm_fmadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.

_mm_fmadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.

_mm_fmadd_sdx86-64 and fma

Multiplies the lower double-precision (64-bit) floating-point elements in a and b, and add the intermediate result to the lower element in c. Stores the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fmadd_ssx86-64 and fma

Multiplies the lower single-precision (32-bit) floating-point elements in a and b, and add the intermediate result to the lower element in c. Stores the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.
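The fused multiply-add intrinsics above compute a*b + c with a single rounding step. A sketch of the scalar form (the helper name is ours; FMA is not part of baseline x86-64, so the call is gated behind runtime detection):

```rust
use std::arch::x86_64::*;

// Fused multiply-add on the low f32 lane: a*b + c in one rounding.
#[target_feature(enable = "fma")]
unsafe fn fmadd_low(a: f32, b: f32, c: f32) -> f32 {
    _mm_cvtss_f32(_mm_fmadd_ss(_mm_set_ss(a), _mm_set_ss(b), _mm_set_ss(c)))
}

fn main() {
    if is_x86_feature_detected!("fma") {
        let r = unsafe { fmadd_low(2.0, 3.0, 4.0) };
        assert_eq!(r, 10.0); // 2*3 + 4
    }
    println!("ok");
}
```

Because the intermediate product is not rounded, FMA can give a slightly different (more accurate) result than a separate multiply followed by an add.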

_mm_fmaddsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm_fmaddsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm_fmsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm_fmsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm_fmsub_sdx86-64 and fma

Multiplies the lower double-precision (64-bit) floating-point elements in a and b, and subtract the lower element in c from the intermediate result. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fmsub_ssx86-64 and fma

Multiplies the lower single-precision (32-bit) floating-point elements in a and b, and subtract the lower element in c from the intermediate result. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_fmsubadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm_fmsubadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm_fnmadd_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm_fnmadd_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm_fnmadd_sdx86-64 and fma

Multiplies the lower double-precision (64-bit) floating-point elements in a and b, and add the negated intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fnmadd_ssx86-64 and fma

Multiplies the lower single-precision (32-bit) floating-point elements in a and b, and add the negated intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_fnmsub_pdx86-64 and fma

Multiplies packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm_fnmsub_psx86-64 and fma

Multiplies packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm_fnmsub_sdx86-64 and fma

Multiplies the lower double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fnmsub_ssx86-64 and fma

Multiplies the lower single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_getcsrx86-64 and sse

Gets the unsigned 32-bit value of the MXCSR control and status register.

_mm_hadd_epi16x86-64 and ssse3

Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16].

_mm_hadd_epi32x86-64 and ssse3

Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of [4 x i32].

_mm_hadd_pdx86-64 and sse3

Horizontally adds adjacent pairs of double-precision (64-bit) floating-point elements in a and b, and pack the results.

_mm_hadd_psx86-64 and sse3

Horizontally adds adjacent pairs of single-precision (32-bit) floating-point elements in a and b, and pack the results.
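Horizontal adds pair up neighbors within each operand: `_mm_hadd_ps(a, b)` produces `[a0+a1, a2+a3, b0+b1, b2+b3]`. A sketch (the helper name is ours; SSE3 needs runtime detection on generic x86-64 builds):

```rust
use std::arch::x86_64::*;

// Horizontally add adjacent lane pairs and return the low result lane.
#[target_feature(enable = "sse3")]
unsafe fn hadd_demo() -> f32 {
    // _mm_setr_ps lists lanes lowest-first: lanes are [1, 2, 3, 4].
    let a = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
    let h = _mm_hadd_ps(a, a); // [3, 7, 3, 7]
    _mm_cvtss_f32(h) // low lane: 1 + 2
}

fn main() {
    if is_x86_feature_detected!("sse3") {
        assert_eq!(unsafe { hadd_demo() }, 3.0);
    }
    println!("ok");
}
```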

_mm_hadds_epi16x86-64 and ssse3

Horizontally adds the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16]. Positive sums greater than 7FFFh are saturated to 7FFFh. Negative sums less than 8000h are saturated to 8000h.

_mm_hsub_epi16x86-64 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16].

_mm_hsub_epi32x86-64 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [4 x i32].

_mm_hsub_pdx86-64 and sse3

Horizontally subtract adjacent pairs of double-precision (64-bit) floating-point elements in a and b, and pack the results.

_mm_hsub_psx86-64 and sse3

Horizontally subtracts adjacent pairs of single-precision (32-bit) floating-point elements in a and b, and pack the results.

_mm_hsubs_epi16x86-64 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16]. Positive differences greater than 7FFFh are saturated to 7FFFh. Negative differences less than 8000h are saturated to 8000h.

_mm_i32gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.
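The gather descriptions above are terse; the addressing rule is that element i comes from the byte address base + offsets[i] * scale. That semantics can be sketched in scalar Rust (a hypothetical helper, not the intrinsic itself):

```rust
use std::convert::TryInto;

// Scalar model of _mm_i32gather_epi32: element i is read from the
// byte address base + offsets[i] * scale. scale is 1, 2, 4, or 8.
fn gather_i32(base: &[i32], offsets: [i32; 4], scale: usize) -> [i32; 4] {
    // Reinterpret the slice as bytes so offsets are byte offsets.
    let bytes: &[u8] = unsafe {
        std::slice::from_raw_parts(base.as_ptr() as *const u8, base.len() * 4)
    };
    let mut out = [0i32; 4];
    for (o, &off) in out.iter_mut().zip(offsets.iter()) {
        let at = off as usize * scale;
        *o = i32::from_ne_bytes(bytes[at..at + 4].try_into().unwrap());
    }
    out
}

fn main() {
    let data = [10, 20, 30, 40, 50];
    // scale = 4 (the size of i32) turns offsets into element indices.
    assert_eq!(gather_i32(&data, [4, 2, 0, 1], 4), [50, 30, 10, 20]);
    println!("ok");
}
```

The masked variants below additionally take a mask vector and a src vector; lanes whose mask bit is clear come from src instead of memory.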

_mm_insert_epi8x86-64 and sse4.1

Returns a copy of a with the 8-bit integer from i inserted at a location specified by imm8.

_mm_insert_epi16x86-64 and sse2

Returns a new vector where the imm8 element of a is replaced with i.

_mm_insert_epi32x86-64 and sse4.1

Returns a copy of a with the 32-bit integer from i inserted at a location specified by imm8.

_mm_insert_epi64x86-64 and sse4.1

Returns a copy of a with the 64-bit integer from i inserted at a location specified by imm8.

_mm_insert_psx86-64 and sse4.1

Selects a single element from b to insert into a at the position specified by imm8, then zeroes elements according to imm8.

_mm_insert_si64x86-64 and sse4a

Inserts the [length:0] bits of y into x at index.

_mm_lddqu_si128x86-64 and sse3

Loads 128-bits of integer data from unaligned memory. This intrinsic may perform better than _mm_loadu_si128 when the data crosses a cache line boundary.

_mm_lfencex86-64 and sse2

Performs a serializing operation on all load-from-memory instructions that were issued prior to this instruction.

_mm_load1_pdx86-64 and sse2

Loads a double-precision (64-bit) floating-point element from memory into both elements of returned vector.

_mm_load1_psx86-64 and sse

Construct a __m128 by duplicating the value read from p into all elements.

_mm_load_pdx86-64 and sse2

Loads 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_load_pd1x86-64 and sse2

Loads a double-precision (64-bit) floating-point element from memory into both elements of returned vector.

_mm_load_psx86-64 and sse

Loads four f32 values from aligned memory into a __m128. If the pointer is not aligned to a 128-bit boundary (16 bytes) a general protection fault will be triggered (fatal program crash).
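Because `_mm_load_ps` requires 16-byte alignment, the unaligned `_mm_loadu_ps` is the safe default when loading from an ordinary slice whose alignment is unknown. A minimal sketch (the helper name is ours; SSE is baseline on x86-64):

```rust
use std::arch::x86_64::*;

// Load four f32s without any alignment requirement and return lane 0.
fn first_elem(vals: &[f32; 4]) -> f32 {
    unsafe { _mm_cvtss_f32(_mm_loadu_ps(vals.as_ptr())) }
}

fn main() {
    assert_eq!(first_elem(&[1.5, 2.0, 3.0, 4.0]), 1.5);
    println!("ok");
}
```

When the data is known to be 16-byte aligned (for example via an aligned wrapper type), `_mm_load_ps` may be preferable.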

_mm_load_ps1x86-64 and sse

Alias for _mm_load1_ps

_mm_load_sdx86-64 and sse2

Loads a 64-bit double-precision value into the low element of a 128-bit vector of [2 x double] and clears the upper element.

_mm_load_si128x86-64 and sse2

Loads 128-bits of integer data from memory into a new vector.

_mm_load_ssx86-64 and sse

Construct a __m128 with the lowest element read from p and the other elements set to zero.

_mm_loaddup_pdx86-64 and sse3

Loads a double-precision (64-bit) floating-point element from memory into both elements of return vector.

_mm_loadh_pdx86-64 and sse2

Loads a double-precision value into the high-order bits of a 128-bit vector of [2 x double]. The low-order bits are copied from the low-order bits of the first operand.

_mm_loadl_epi64x86-64 and sse2

Loads 64-bit integer from memory into first element of returned vector.

_mm_loadl_pdx86-64 and sse2

Loads a double-precision value into the low-order bits of a 128-bit vector of [2 x double]. The high-order bits are copied from the high-order bits of the first operand.

_mm_loadr_pdx86-64 and sse2

Loads 2 double-precision (64-bit) floating-point elements from memory into the returned vector in reverse order. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_loadr_psx86-64 and sse

Loads four f32 values from aligned memory into a __m128 in reverse order.

_mm_loadu_pdx86-64 and sse2

Loads 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. mem_addr does not need to be aligned on any particular boundary.

_mm_loadu_psx86-64 and sse

Loads four f32 values from memory into a __m128. There are no restrictions on memory alignment. For aligned memory _mm_load_ps may be faster.

_mm_loadu_si64x86-64 and sse

Loads unaligned 64-bits of integer data from memory into new vector.

_mm_loadu_si128x86-64 and sse2

Loads 128-bits of integer data from memory into a new vector.

_mm_madd_epi16x86-64 and sse2

Multiplies and then horizontally adds signed 16-bit integers in a and b.

_mm_maddubs_epi16x86-64 and ssse3

Multiplies corresponding pairs of packed 8-bit unsigned integer values contained in the first source operand and packed 8-bit signed integer values contained in the second source operand, add pairs of contiguous products with signed saturation, and writes the 16-bit sums to the corresponding bits in the destination.

_mm_mask_i32gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_epi32x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_epi64x86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_pdx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_psx86-64 and avx2

Returns values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_maskload_epi32x86-64 and avx2

Loads packed 32-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm_maskload_epi64x86-64 and avx2

Loads packed 64-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm_maskload_pdx86-64 and avx

Loads packed double-precision (64-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm_maskload_psx86-64 and avx

Loads packed single-precision (32-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm_maskmoveu_si128x86-64 and sse2

Conditionally store 8-bit integer elements from a into memory using mask.

_mm_maskstore_epi32x86-64 and avx2

Stores packed 32-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm_maskstore_epi64x86-64 and avx2

Stores packed 64-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm_maskstore_pdx86-64 and avx

Stores packed double-precision (64-bit) floating-point elements from a into memory using mask.

_mm_maskstore_psx86-64 and avx

Stores packed single-precision (32-bit) floating-point elements from a into memory using mask.

_mm_max_epi8x86-64 and sse4.1

Compares packed 8-bit integers in a and b and returns packed maximum values in dst.

_mm_max_epi16x86-64 and sse2

Compares packed 16-bit integers in a and b, and returns the packed maximum values.

_mm_max_epi32x86-64 and sse4.1

Compares packed 32-bit integers in a and b, and returns packed maximum values.

_mm_max_epu8x86-64 and sse2

Compares packed unsigned 8-bit integers in a and b, and returns the packed maximum values.

_mm_max_epu16x86-64 and sse4.1

Compares packed unsigned 16-bit integers in a and b, and returns packed maximum.

_mm_max_epu32x86-64 and sse4.1

Compares packed unsigned 32-bit integers in a and b, and returns packed maximum values.

_mm_max_pdx86-64 and sse2

Returns a new vector with the maximum values from corresponding elements in a and b.

_mm_max_psx86-64 and sse

Compares packed single-precision (32-bit) floating-point elements in a and b, and return the corresponding maximum values.

_mm_max_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the maximum of the lower elements of a and b.

_mm_max_ssx86-64 and sse

Compares the first single-precision (32-bit) floating-point element of a and b, and return the maximum value in the first element of the return value, the other elements are copied from a.

_mm_mfencex86-64 and sse2

Performs a serializing operation on all load-from-memory and store-to-memory instructions that were issued prior to this instruction.

_mm_min_epi8x86-64 and sse4.1

Compares packed 8-bit integers in a and b and returns packed minimum values in dst.

_mm_min_epi16x86-64 and sse2

Compares packed 16-bit integers in a and b, and returns the packed minimum values.

_mm_min_epi32x86-64 and sse4.1

Compares packed 32-bit integers in a and b, and returns packed minimum values.

_mm_min_epu8x86-64 and sse2

Compares packed unsigned 8-bit integers in a and b, and returns the packed minimum values.

_mm_min_epu16x86-64 and sse4.1

Compares packed unsigned 16-bit integers in a and b, and returns packed minimum.

_mm_min_epu32x86-64 and sse4.1

Compares packed unsigned 32-bit integers in a and b, and returns packed minimum values.

_mm_min_pdx86-64 and sse2

Returns a new vector with the minimum values from corresponding elements in a and b.

_mm_min_psx86-64 and sse

Compares packed single-precision (32-bit) floating-point elements in a and b, and return the corresponding minimum values.

_mm_min_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the minimum of the lower elements of a and b.

_mm_min_ssx86-64 and sse

Compares the first single-precision (32-bit) floating-point element of a and b, and return the minimum value in the first element of the return value, the other elements are copied from a.

_mm_minpos_epu16x86-64 and sse4.1

Finds the minimum unsigned 16-bit element in the 128-bit __m128i vector, returning a vector containing its value in its first position, and its index in its second position; all other elements are set to zero.
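The packed value/index layout of `_mm_minpos_epu16` means the low 32 bits of the result hold value | (index << 16). A sketch (the helper name is ours; SSE4.1 is gated behind runtime detection):

```rust
use std::arch::x86_64::*;

// Find the minimum u16 lane and its index via _mm_minpos_epu16.
#[target_feature(enable = "sse4.1")]
unsafe fn minpos_demo() -> (u16, u16) {
    // setr lists lanes lowest-first; the minimum is 2 at index 4.
    let a = _mm_setr_epi16(5, 3, 9, 7, 2, 8, 6, 4);
    let r = _mm_cvtsi128_si32(_mm_minpos_epu16(a)) as u32;
    ((r & 0xFFFF) as u16, (r >> 16) as u16) // (value, index)
}

fn main() {
    if is_x86_feature_detected!("sse4.1") {
        assert_eq!(unsafe { minpos_demo() }, (2, 4));
    }
    println!("ok");
}
```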

_mm_move_epi64x86-64 and sse2

Returns a vector where the low element is extracted from a and its upper element is zero.

_mm_move_sdx86-64 and sse2

Constructs a 128-bit floating-point vector of [2 x double]. The lower 64 bits are set to the lower 64 bits of the second parameter. The upper 64 bits are set to the upper 64 bits of the first parameter.

_mm_move_ssx86-64 and sse

Returns a __m128 with the first component from b and the remaining components from a.

_mm_movedup_pdx86-64 and sse3

Duplicate the low double-precision (64-bit) floating-point element from a.

_mm_movehdup_psx86-64 and sse3

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a.

_mm_movehl_psx86-64 and sse

Combines the higher halves of a and b. The higher half of b occupies the lower half of the result.

_mm_moveldup_psx86-64 and sse3

Duplicate even-indexed single-precision (32-bit) floating-point elements from a.

_mm_movelh_psx86-64 and sse

Combines the lower halves of a and b. The lower half of b occupies the higher half of the result.

_mm_movemask_epi8x86-64 and sse2

Returns a mask of the most significant bit of each element in a.

_mm_movemask_pdx86-64 and sse2

Returns a mask of the most significant bit of each element in a.

_mm_movemask_psx86-64 and sse

Returns a mask of the most significant bit of each element in a.
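The three movemask intrinsics above gather each element's most significant (sign) bit into an ordinary integer, which makes them useful for branching on vector comparison results. A minimal sketch (the helper name is ours; both calls are SSE/SSE2 baseline on x86-64):

```rust
use std::arch::x86_64::*;

// Collect per-element sign bits into integer masks.
fn masks() -> (i32, i32) {
    unsafe {
        let all_neg = _mm_set1_epi8(-1); // sign bit set in all 16 bytes
        let floats = _mm_setr_ps(-1.0, 2.0, -3.0, 4.0); // signs: 1,0,1,0
        (_mm_movemask_epi8(all_neg), _mm_movemask_ps(floats))
    }
}

fn main() {
    // 16 set byte signs -> 0xFFFF; f32 signs in lanes 0 and 2 -> 0b0101.
    assert_eq!(masks(), (0xFFFF, 0b0101));
    println!("ok");
}
```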

_mm_mpsadbw_epu8x86-64 and sse4.1

Subtracts 8-bit unsigned integer values and computes the absolute values of the differences. Sums of the absolute differences are then returned according to the bit fields in the immediate operand.

_mm_mul_epi32x86-64 and sse4.1

Multiplies the low 32-bit integers from each packed 64-bit element in a and b, and returns the signed 64-bit result.

_mm_mul_epu32x86-64 and sse2

Multiplies the low unsigned 32-bit integers from each packed 64-bit element in a and b.

_mm_mul_pdx86-64 and sse2

Multiplies packed double-precision (64-bit) floating-point elements in a and b.

_mm_mul_psx86-64 and sse

Multiplies __m128 vectors.

_mm_mul_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by multiplying the low elements of a and b.

_mm_mul_ssx86-64 and sse

Multiplies the first component of a and b, the other components are copied from a.

_mm_mulhi_epi16x86-64 and sse2

Multiplies the packed 16-bit integers in a and b, returning the high 16 bits of each intermediate 32-bit product.

_mm_mulhi_epu16x86-64 and sse2

Multiplies the packed unsigned 16-bit integers in a and b, returning the high 16 bits of each intermediate 32-bit product.

_mm_mulhrs_epi16x86-64 and ssse3

Multiplies packed 16-bit signed integer values, truncates each 32-bit product to its 18 most significant bits by right-shifting, rounds the truncated value by adding 1, and writes bits [16:1] to the destination.

_mm_mullo_epi16x86-64 and sse2

Multiplies the packed 16-bit integers in a and b, returning the low 16 bits of each intermediate 32-bit product.

_mm_mullo_epi32x86-64 and sse4.1

Multiplies the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and returns the low 32 bits of each product, reinterpreted as a signed integer. While pmulld on __m128i::splat(2) and __m128i::splat(2) returns the obvious __m128i::splat(4), due to wrapping arithmetic pmulld on __m128i::splat(i32::MAX) and __m128i::splat(2) would return a negative number.
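That wrapping behaviour can be observed directly. A sketch (the helper name is ours; SSE4.1 is gated behind runtime detection):

```rust
use std::arch::x86_64::*;

// Multiply two splatted i32s and return the low 32 bits of lane 0.
#[target_feature(enable = "sse4.1")]
unsafe fn mullo_low(a: i32, b: i32) -> i32 {
    _mm_cvtsi128_si32(_mm_mullo_epi32(_mm_set1_epi32(a), _mm_set1_epi32(b)))
}

fn main() {
    if is_x86_feature_detected!("sse4.1") {
        assert_eq!(unsafe { mullo_low(2, 2) }, 4);
        // i32::MAX * 2 = 0xFFFF_FFFE in the low 32 bits, i.e. -2.
        assert_eq!(unsafe { mullo_low(i32::MAX, 2) }, -2);
    }
    println!("ok");
}
```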

_mm_or_pdx86-64 and sse2

Computes the bitwise OR of a and b.

_mm_or_psx86-64 and sse

Bitwise OR of packed single-precision (32-bit) floating-point elements.

_mm_or_si128x86-64 and sse2

Computes the bitwise OR of 128 bits (representing integer data) in a and b.

_mm_packs_epi16x86-64 and sse2

Converts packed 16-bit integers from a and b to packed 8-bit integers using signed saturation.

_mm_packs_epi32x86-64 and sse2

Converts packed 32-bit integers from a and b to packed 16-bit integers using signed saturation.

_mm_packus_epi16x86-64 and sse2

Converts packed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation.

_mm_packus_epi32x86-64 and sse4.1

Converts packed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation.
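The difference between the signed (`packs`) and unsigned (`packus`) saturation above shows up for out-of-range inputs: 300 clamps to 127 signed but 255 unsigned, while -5 stays -5 signed but clamps to 0 unsigned. A sketch (the helper name is ours; both intrinsics are SSE2 baseline):

```rust
use std::arch::x86_64::*;

// Pack a splatted i16 into bytes with signed and unsigned saturation,
// returning the low 4 result bytes of each as an i32.
fn pack_low_bytes(word: i16) -> (i32, i32) {
    unsafe {
        let a = _mm_set1_epi16(word);
        (
            _mm_cvtsi128_si32(_mm_packs_epi16(a, a)),  // signed saturation
            _mm_cvtsi128_si32(_mm_packus_epi16(a, a)), // unsigned saturation
        )
    }
}

fn main() {
    // 300 -> 0x7F per byte signed, 0xFF per byte unsigned.
    assert_eq!(pack_low_bytes(300), (0x7F7F_7F7F, 0xFFFF_FFFFu32 as i32));
    // -5 -> 0xFB per byte signed, clamps to 0 unsigned.
    assert_eq!(pack_low_bytes(-5), (0xFBFB_FBFBu32 as i32, 0));
    println!("ok");
}
```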

_mm_pausex86-64

Provides a hint to the processor that the code sequence is a spin-wait loop.

_mm_permute_pdx86-64 and avx,sse2

Shuffles double-precision (64-bit) floating-point elements in a using the control in imm8.

_mm_permute_psx86-64 and avx,sse

Shuffles single-precision (32-bit) floating-point elements in a using the control in imm8.

_mm_permutevar_pdx86-64 and avx

Shuffles double-precision (64-bit) floating-point elements in a using the control in b.

_mm_permutevar_psx86-64 and avx

Shuffles single-precision (32-bit) floating-point elements in a using the control in b.

_mm_prefetchx86-64 and sse

Fetch the cache line that contains address p using the given strategy.

_mm_rcp_psx86-64 and sse

Returns the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a.

_mm_rcp_ssx86-64 and sse

Returns the approximate reciprocal of the first single-precision (32-bit) floating-point element in a, the other elements are unchanged.

_mm_round_pdx86-64 and sse4.1

Round the packed double-precision (64-bit) floating-point elements in a using the rounding parameter, and stores the results as packed double-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_psx86-64 and sse4.1

Round the packed single-precision (32-bit) floating-point elements in a using the rounding parameter, and stores the results as packed single-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_sdx86-64 and sse4.1

Round the lower double-precision (64-bit) floating-point element in b using the rounding parameter, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copies the upper element from a to the upper element of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_ssx86-64 and sse4.1

Round the lower single-precision (32-bit) floating-point element in b using the rounding parameter, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copy the upper 3 packed elements from a to the upper elements of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of:

_mm_rsqrt_psx86-64 and sse

Returns the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a.

_mm_rsqrt_ssx86-64 and sse

Returns the approximate reciprocal square root of the first single-precision (32-bit) floating-point element in a; the other elements are unchanged.

_mm_sad_epu8x86-64 and sse2

Sum the absolute differences of packed unsigned 8-bit integers.

_mm_set1_epi8x86-64 and sse2

Broadcasts 8-bit integer a to all elements.

_mm_set1_epi16x86-64 and sse2

Broadcasts 16-bit integer a to all elements.

_mm_set1_epi32x86-64 and sse2

Broadcasts 32-bit integer a to all elements.

_mm_set1_epi64xx86-64 and sse2

Broadcasts 64-bit integer a to all elements.

_mm_set1_pdx86-64 and sse2

Broadcasts double-precision (64-bit) floating-point value a to all elements of the return value.

_mm_set1_psx86-64 and sse

Construct a __m128 with all elements set to a.

_mm_set_epi8x86-64 and sse2

Sets packed 8-bit integers with the supplied values.

_mm_set_epi16x86-64 and sse2

Sets packed 16-bit integers with the supplied values.

_mm_set_epi32x86-64 and sse2

Sets packed 32-bit integers with the supplied values.

_mm_set_epi64xx86-64 and sse2

Sets packed 64-bit integers with the supplied values, from highest to lowest.

_mm_set_pdx86-64 and sse2

Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values.

_mm_set_pd1x86-64 and sse2

Broadcasts double-precision (64-bit) floating-point value a to all elements of the return value.

_mm_set_psx86-64 and sse

Construct a __m128 from four floating point values highest to lowest.

_mm_set_ps1x86-64 and sse

Alias for _mm_set1_ps

_mm_set_sdx86-64 and sse2

Copies double-precision (64-bit) floating-point element a to the lower element of the packed 64-bit return value.

_mm_set_ssx86-64 and sse

Construct a __m128 with the lowest element set to a and the rest set to zero.

_mm_setcsrx86-64 and sse

Sets the MXCSR register with the 32-bit unsigned integer value.

_mm_setr_epi8x86-64 and sse2

Sets packed 8-bit integers with the supplied values in reverse order.

_mm_setr_epi16x86-64 and sse2

Sets packed 16-bit integers with the supplied values in reverse order.

_mm_setr_epi32x86-64 and sse2

Sets packed 32-bit integers with the supplied values in reverse order.

_mm_setr_pdx86-64 and sse2

Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values in reverse order.

_mm_setr_psx86-64 and sse

Construct a __m128 from four floating point values lowest to highest.
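The set/setr pairs above differ only in argument order: `_mm_set_*` takes arguments highest element first, `_mm_setr_*` lowest element first. A minimal sketch (assuming an x86_64 target, where SSE2 is part of the baseline feature set; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Returns the in-memory layout of a vector built with _mm_set_epi32
// versus one built with _mm_setr_epi32.
fn set_vs_setr() -> ([i32; 4], [i32; 4]) {
    // SSE2 is always available on x86_64, so these calls are sound here.
    unsafe {
        // _mm_set_epi32 takes its arguments highest element first...
        let a = _mm_set_epi32(3, 2, 1, 0);
        // ...while _mm_setr_epi32 takes them lowest element first.
        let b = _mm_setr_epi32(0, 1, 2, 3);
        let (mut ea, mut eb) = ([0i32; 4], [0i32; 4]);
        _mm_storeu_si128(ea.as_mut_ptr() as *mut __m128i, a);
        _mm_storeu_si128(eb.as_mut_ptr() as *mut __m128i, b);
        (ea, eb)
    }
}

fn main() {
    let (ea, eb) = set_vs_setr();
    assert_eq!(ea, eb); // both describe the same vector: [0, 1, 2, 3]
    println!("{:?}", ea);
}
```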

_mm_setzero_pdx86-64 and sse2

Returns packed double-precision (64-bit) floating-point elements with all zeros.

_mm_setzero_psx86-64 and sse

Construct a __m128 with all elements initialized to zero.

_mm_setzero_si128x86-64 and sse2

Returns a vector with all elements set to zero.

_mm_sfencex86-64 and sse

Performs a serializing operation on all store-to-memory instructions that were issued prior to this instruction.

_mm_sha1msg1_epu32x86-64 and sha

Performs an intermediate calculation for the next four SHA1 message values (unsigned 32-bit integers) using previous message values from a and b, and returns the result.

_mm_sha1msg2_epu32x86-64 and sha

Performs the final calculation for the next four SHA1 message values (unsigned 32-bit integers) using the intermediate result in a and the previous message values in b, and returns the result.

_mm_sha1nexte_epu32x86-64 and sha

Calculate SHA1 state variable E after four rounds of operation from the current SHA1 state variable a, add that value to the scheduled values (unsigned 32-bit integers) in b, and returns the result.

_mm_sha1rnds4_epu32x86-64 and sha

Performs four rounds of SHA1 operation using an initial SHA1 state (A,B,C,D) from a and some pre-computed sum of the next 4 round message values (unsigned 32-bit integers), and state variable E from b, and return the updated SHA1 state (A,B,C,D). func contains the logic functions and round constants.

_mm_sha256msg1_epu32x86-64 and sha

Performs an intermediate calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from a and b, and return the result.

_mm_sha256msg2_epu32x86-64 and sha

Performs the final calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from a and b, and return the result.

_mm_sha256rnds2_epu32x86-64 and sha

Performs 2 rounds of SHA256 operation using an initial SHA256 state (C,D,G,H) from a, an initial SHA256 state (A,B,E,F) from b, and a pre-computed sum of the next 2 round message values (unsigned 32-bit integers) and the corresponding round constants from k, and store the updated SHA256 state (A,B,E,F) in dst.

_mm_shuffle_epi8x86-64 and ssse3

Shuffles bytes from a according to the content of b.

_mm_shuffle_epi32x86-64 and sse2

Shuffles 32-bit integers in a using the control in imm8.

_mm_shuffle_pdx86-64 and sse2

Constructs a 128-bit floating-point vector of [2 x double] from two 128-bit vector parameters of [2 x double], using the immediate-value parameter as a specifier.

_mm_shuffle_psx86-64 and sse

Shuffles packed single-precision (32-bit) floating-point elements in a and b using mask.

_mm_shufflehi_epi16x86-64 and sse2

Shuffles 16-bit integers in the high 64 bits of a using the control in imm8.

_mm_shufflelo_epi16x86-64 and sse2

Shuffles 16-bit integers in the low 64 bits of a using the control in imm8.

_mm_sign_epi8x86-64 and ssse3

Negates packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and returns the result. Elements in result are zeroed out when the corresponding element in b is zero.

_mm_sign_epi16x86-64 and ssse3

Negates packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and returns the results. Elements in result are zeroed out when the corresponding element in b is zero.

_mm_sign_epi32x86-64 and ssse3

Negates packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and returns the results. Elements in result are zeroed out when the corresponding element in b is zero.

_mm_sll_epi16x86-64 and sse2

Shifts packed 16-bit integers in a left by count while shifting in zeros.

_mm_sll_epi32x86-64 and sse2

Shifts packed 32-bit integers in a left by count while shifting in zeros.

_mm_sll_epi64x86-64 and sse2

Shifts packed 64-bit integers in a left by count while shifting in zeros.

_mm_slli_epi16x86-64 and sse2

Shifts packed 16-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_epi32x86-64 and sse2

Shifts packed 32-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_epi64x86-64 and sse2

Shifts packed 64-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_si128x86-64 and sse2

Shifts a left by imm8 bytes while shifting in zeros.

_mm_sllv_epi32x86-64 and avx2

Shifts packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm_sllv_epi64x86-64 and avx2

Shifts packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm_sqrt_pdx86-64 and sse2

Returns a new vector with the square root of each of the values in a.

_mm_sqrt_psx86-64 and sse

Returns the square root of packed single-precision (32-bit) floating-point elements in a.

_mm_sqrt_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the square root of the lower element of b.

_mm_sqrt_ssx86-64 and sse

Returns the square root of the first single-precision (32-bit) floating-point element in a; the other elements are unchanged.

_mm_sra_epi16x86-64 and sse2

Shifts packed 16-bit integers in a right by count while shifting in sign bits.

_mm_sra_epi32x86-64 and sse2

Shifts packed 32-bit integers in a right by count while shifting in sign bits.

_mm_srai_epi16x86-64 and sse2

Shifts packed 16-bit integers in a right by imm8 while shifting in sign bits.

_mm_srai_epi32x86-64 and sse2

Shifts packed 32-bit integers in a right by imm8 while shifting in sign bits.

_mm_srav_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits.

_mm_srl_epi16x86-64 and sse2

Shifts packed 16-bit integers in a right by count while shifting in zeros.

_mm_srl_epi32x86-64 and sse2

Shifts packed 32-bit integers in a right by count while shifting in zeros.

_mm_srl_epi64x86-64 and sse2

Shifts packed 64-bit integers in a right by count while shifting in zeros.

_mm_srli_epi16x86-64 and sse2

Shifts packed 16-bit integers in a right by imm8 while shifting in zeros.

_mm_srli_epi32x86-64 and sse2

Shifts packed 32-bit integers in a right by imm8 while shifting in zeros.

_mm_srli_epi64x86-64 and sse2

Shifts packed 64-bit integers in a right by imm8 while shifting in zeros.
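The difference between the logical (srl) and arithmetic (sra) right shifts above is what gets shifted in: zeros versus copies of the sign bit. A minimal sketch using the count-vector forms (assuming an x86_64 target, where SSE2 is baseline; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Right-shifts -8 by one bit both arithmetically and logically and
// returns the low lane of each result.
fn shift_demo() -> (i32, i32) {
    unsafe {
        let a = _mm_set1_epi32(-8);
        // The shift amount lives in the low 64 bits of the count vector.
        let count = _mm_cvtsi32_si128(1);
        let arith = _mm_sra_epi32(a, count); // shifts in sign bits
        let logic = _mm_srl_epi32(a, count); // shifts in zeros
        let (mut sa, mut sl) = ([0i32; 4], [0i32; 4]);
        _mm_storeu_si128(sa.as_mut_ptr() as *mut __m128i, arith);
        _mm_storeu_si128(sl.as_mut_ptr() as *mut __m128i, logic);
        (sa[0], sl[0])
    }
}

fn main() {
    let (arith, logic) = shift_demo();
    assert_eq!(arith, -4);                 // -8 >> 1, sign preserved
    assert_eq!(logic as u32, 0x7FFF_FFFC); // 0xFFFF_FFF8 >> 1, zero-filled
}
```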

_mm_srli_si128x86-64 and sse2

Shifts a right by imm8 bytes while shifting in zeros.

_mm_srlv_epi32x86-64 and avx2

Shifts packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm_srlv_epi64x86-64 and avx2

Shifts packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and returns the result.

_mm_store1_pdx86-64 and sse2

Stores the lower double-precision (64-bit) floating-point element from a into 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store1_psx86-64 and sse

Stores the lowest 32 bit float of a repeated four times into aligned memory.

_mm_store_pdx86-64 and sse2

Stores 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store_pd1x86-64 and sse2

Stores the lower double-precision (64-bit) floating-point element from a into 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store_psx86-64 and sse

Stores four 32-bit floats into aligned memory.

_mm_store_ps1x86-64 and sse

Alias for _mm_store1_ps

_mm_store_sdx86-64 and sse2

Stores the lower 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_store_si128x86-64 and sse2

Stores 128-bits of integer data from a into memory.

_mm_store_ssx86-64 and sse

Stores the lowest 32 bit float of a into memory.

_mm_storeh_pdx86-64 and sse2

Stores the upper 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_storel_epi64x86-64 and sse2

Stores the lower 64-bit integer a to a memory location.

_mm_storel_pdx86-64 and sse2

Stores the lower 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_storer_pdx86-64 and sse2

Stores 2 double-precision (64-bit) floating-point elements from a into memory in reverse order. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_storer_psx86-64 and sse

Stores four 32-bit floats into aligned memory in reverse order.

_mm_storeu_pdx86-64 and sse2

Stores 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm_storeu_psx86-64 and sse

Stores four 32-bit floats into memory. There are no restrictions on memory alignment. For aligned memory _mm_store_ps may be faster.

_mm_storeu_si128x86-64 and sse2

Stores 128-bits of integer data from a into memory.
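The aligned stores above fault on misaligned addresses, while the `_mm_storeu_*` forms accept any address. A minimal sketch (assuming an x86_64 target; the `Aligned` wrapper type is invented here just to guarantee the 16-byte alignment that `_mm_store_ps` requires):

```rust
use std::arch::x86_64::*;

// Illustrative wrapper: repr(align(16)) guarantees the 16-byte
// alignment the aligned store requires.
#[repr(align(16))]
struct Aligned([f32; 4]);

fn store_demo() -> ([f32; 4], [f32; 4]) {
    unsafe {
        let v = _mm_set1_ps(2.5);
        let mut aligned = Aligned([0.0; 4]);
        // Aligned store: mem_addr must be on a 16-byte boundary.
        _mm_store_ps(aligned.0.as_mut_ptr(), v);
        let mut anywhere = [0.0f32; 4];
        // Unaligned store: no alignment requirement, possibly slower.
        _mm_storeu_ps(anywhere.as_mut_ptr(), v);
        (aligned.0, anywhere)
    }
}

fn main() {
    let (a, u) = store_demo();
    assert_eq!(a, [2.5; 4]);
    assert_eq!(u, [2.5; 4]);
}
```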

_mm_stream_pdx86-64 and sse2

Stores a 128-bit floating point vector of [2 x double] to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_psx86-64 and sse

Stores a into the memory at mem_addr using a non-temporal memory hint.

_mm_stream_sdx86-64 and sse4a

Non-temporal store of a.0 into p.

_mm_stream_si32x86-64 and sse2

Stores a 32-bit integer value in the specified memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_si64x86-64 and sse2

Stores a 64-bit integer value in the specified memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_si128x86-64 and sse2

Stores a 128-bit integer vector to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_ssx86-64 and sse4a

Non-temporal store of a.0 into p.

_mm_sub_epi8x86-64 and sse2

Subtracts packed 8-bit integers in b from packed 8-bit integers in a.

_mm_sub_epi16x86-64 and sse2

Subtracts packed 16-bit integers in b from packed 16-bit integers in a.

_mm_sub_epi32x86-64 and sse2

Subtract packed 32-bit integers in b from packed 32-bit integers in a.

_mm_sub_epi64x86-64 and sse2

Subtract packed 64-bit integers in b from packed 64-bit integers in a.

_mm_sub_pdx86-64 and sse2

Subtract packed double-precision (64-bit) floating-point elements in b from a.

_mm_sub_psx86-64 and sse

Subtracts __m128 vectors.

_mm_sub_sdx86-64 and sse2

Returns a new vector with the low element of a replaced by the result of subtracting the low element of b from the low element of a.

_mm_sub_ssx86-64 and sse

Subtracts the first component of b from a; the other components are copied from a.

_mm_subs_epi8x86-64 and sse2

Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_mm_subs_epi16x86-64 and sse2

Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_mm_subs_epu8x86-64 and sse2

Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.

_mm_subs_epu16x86-64 and sse2

Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.
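The saturating subtractions above clamp at the type's limits instead of wrapping, which the plain `_mm_sub_*` forms do. A minimal sketch contrasting the two per-byte behaviors (assuming an x86_64 target, where SSE2 is baseline; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Subtracts 20 from 10 in every byte, once with wrapping arithmetic
// and once with unsigned saturation; returns the first byte of each.
fn sub_demo() -> (u8, u8) {
    unsafe {
        let a = _mm_set1_epi8(10);
        let b = _mm_set1_epi8(20);
        let wrap = _mm_sub_epi8(a, b);  // 10 - 20 wraps to 246 per byte
        let sat = _mm_subs_epu8(a, b);  // saturates to 0 per byte
        let (mut w, mut s) = ([0u8; 16], [0u8; 16]);
        _mm_storeu_si128(w.as_mut_ptr() as *mut __m128i, wrap);
        _mm_storeu_si128(s.as_mut_ptr() as *mut __m128i, sat);
        (w[0], s[0])
    }
}

fn main() {
    assert_eq!(sub_demo(), (246, 0));
}
```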

_mm_test_all_onesx86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all ones.

_mm_test_all_zerosx86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all zeros.

_mm_test_mix_ones_zerosx86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.

_mm_testc_pdx86-64 and avx

Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm_testc_psx86-64 and avx

Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm_testc_si128x86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all ones.

_mm_testnzc_pdx86-64 and avx

Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm_testnzc_psx86-64 and avx

Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm_testnzc_si128x86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.

_mm_testz_pdx86-64 and avx

Computes the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm_testz_psx86-64 and avx

Computes the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm_testz_si128x86-64 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all zeros.

_mm_tzcnt_32x86-64 and bmi1

Counts the number of trailing least significant zero bits.

_mm_tzcnt_64x86-64 and bmi1

Counts the number of trailing least significant zero bits.
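Unlike SSE2, the BMI1 feature behind these trailing-zero counts is not part of the x86_64 baseline, so runtime detection is needed. A minimal sketch (assuming an x86_64 target; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Trailing-zero count that uses TZCNT when BMI1 is available and
// falls back to the portable u32::trailing_zeros otherwise.
fn trailing(x: u32) -> u32 {
    if is_x86_feature_detected!("bmi1") {
        unsafe { _tzcnt_u32(x) }
    } else {
        x.trailing_zeros()
    }
}

fn main() {
    assert_eq!(trailing(8), 3);
    assert_eq!(trailing(0), 32); // TZCNT is defined for zero, unlike BSF
}
```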

_mm_ucomieq_sdx86-64 and sse2

Compares the lower element of a and b for equality.

_mm_ucomieq_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are equal, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomige_sdx86-64 and sse2

Compares the lower element of a and b for greater-than-or-equal.

_mm_ucomige_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than or equal to the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomigt_sdx86-64 and sse2

Compares the lower element of a and b for greater-than.

_mm_ucomigt_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomile_sdx86-64 and sse2

Compares the lower element of a and b for less-than-or-equal.

_mm_ucomile_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than or equal to the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomilt_sdx86-64 and sse2

Compares the lower element of a and b for less-than.

_mm_ucomilt_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomineq_sdx86-64 and sse2

Compares the lower element of a and b for not-equal.

_mm_ucomineq_ssx86-64 and sse

Compares two 32-bit floats from the low-order bits of a and b. Returns 1 if they are not equal, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_undefined_pdx86-64 and sse2

Returns vector of type __m128d with undefined elements.

_mm_undefined_psx86-64 and sse

Returns vector of type __m128 with undefined elements.

_mm_undefined_si128x86-64 and sse2

Returns vector of type __m128i with undefined elements.

_mm_unpackhi_epi8x86-64 and sse2

Unpacks and interleaves 8-bit integers from the high half of a and b.

_mm_unpackhi_epi16x86-64 and sse2

Unpacks and interleaves 16-bit integers from the high half of a and b.

_mm_unpackhi_epi32x86-64 and sse2

Unpacks and interleaves 32-bit integers from the high half of a and b.

_mm_unpackhi_epi64x86-64 and sse2

Unpacks and interleaves 64-bit integers from the high half of a and b.

_mm_unpackhi_pdx86-64 and sse2

The resulting __m128d element is composed of the high-order values of the two __m128d interleaved input elements, i.e.:

_mm_unpackhi_psx86-64 and sse

Unpacks and interleaves single-precision (32-bit) floating-point elements from the higher half of a and b.

_mm_unpacklo_epi8x86-64 and sse2

Unpacks and interleaves 8-bit integers from the low half of a and b.

_mm_unpacklo_epi16x86-64 and sse2

Unpacks and interleaves 16-bit integers from the low half of a and b.

_mm_unpacklo_epi32x86-64 and sse2

Unpacks and interleaves 32-bit integers from the low half of a and b.

_mm_unpacklo_epi64x86-64 and sse2

Unpacks and interleaves 64-bit integers from the low half of a and b.

_mm_unpacklo_pdx86-64 and sse2

The resulting __m128d element is composed of the low-order values of the two __m128d interleaved input elements, i.e.:

_mm_unpacklo_psx86-64 and sse

Unpacks and interleaves single-precision (32-bit) floating-point elements from the lower half of a and b.
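The unpack family above interleaves elements from the matching halves of the two inputs. A minimal sketch with 32-bit lanes (assuming an x86_64 target, where SSE2 is baseline; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Interleaves the low and high halves of two 32-bit integer vectors.
fn unpack_demo() -> ([i32; 4], [i32; 4]) {
    unsafe {
        let a = _mm_setr_epi32(0, 1, 2, 3);
        let b = _mm_setr_epi32(4, 5, 6, 7);
        let lo = _mm_unpacklo_epi32(a, b); // elements 0 and 1 of each input
        let hi = _mm_unpackhi_epi32(a, b); // elements 2 and 3 of each input
        let (mut l, mut h) = ([0i32; 4], [0i32; 4]);
        _mm_storeu_si128(l.as_mut_ptr() as *mut __m128i, lo);
        _mm_storeu_si128(h.as_mut_ptr() as *mut __m128i, hi);
        (l, h)
    }
}

fn main() {
    let (l, h) = unpack_demo();
    assert_eq!(l, [0, 4, 1, 5]);
    assert_eq!(h, [2, 6, 3, 7]);
}
```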

_mm_xor_pdx86-64 and sse2

Computes the bitwise XOR of a and b.

_mm_xor_psx86-64 and sse

Bitwise exclusive OR of packed single-precision (32-bit) floating-point elements.

_mm_xor_si128x86-64 and sse2

Computes the bitwise XOR of 128 bits (representing integer data) in a and b.

_mulx_u32x86-64 and bmi2

Unsigned multiply without affecting flags.

_mulx_u64x86-64 and bmi2

Unsigned multiply without affecting flags.

_pdep_u32x86-64 and bmi2

Scatter contiguous low order bits of a to the result at the positions specified by the mask.

_pdep_u64x86-64 and bmi2

Scatter contiguous low order bits of a to the result at the positions specified by the mask.

_pext_u32x86-64 and bmi2

Gathers the bits of x specified by the mask into the contiguous low order bit positions of the result.

_pext_u64x86-64 and bmi2

Gathers the bits of x specified by the mask into the contiguous low order bit positions of the result.
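PEXT gathers the bits of x selected by the mask into the low bits of the result; PDEP is its inverse. BMI2 must be detected at runtime. A sketch with a portable fallback (assuming an x86_64 target; `pext_soft` and `extract_bits` are names invented for this example):

```rust
use std::arch::x86_64::*;

// Portable fallback for PEXT: walks the mask's set bits from low to
// high, packing the selected bits of x into the low bits of the result.
fn pext_soft(x: u32, mut mask: u32) -> u32 {
    let mut out = 0;
    let mut bit = 0;
    while mask != 0 {
        let low = mask & mask.wrapping_neg(); // isolate lowest set mask bit
        if x & low != 0 {
            out |= 1 << bit;
        }
        bit += 1;
        mask &= mask - 1; // clear that mask bit
    }
    out
}

fn extract_bits(x: u32, mask: u32) -> u32 {
    if is_x86_feature_detected!("bmi2") {
        unsafe { _pext_u32(x, mask) }
    } else {
        pext_soft(x, mask)
    }
}

fn main() {
    // Bits 7..4 of 0b1011_0010 read 1011; they are packed into the low bits.
    assert_eq!(extract_bits(0b1011_0010, 0b1111_0000), 0b1011);
}
```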

_popcnt32x86-64 and popcnt

Counts the bits that are set.

_popcnt64x86-64 and popcnt

Counts the bits that are set.
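The POPCNT feature is also not part of the x86_64 baseline, so the same detect-or-fall-back pattern applies (assuming an x86_64 target; the function name is illustrative):

```rust
use std::arch::x86_64::*;

// Population count that uses the POPCNT instruction when available
// and the portable count_ones otherwise.
fn ones(x: i32) -> i32 {
    if is_x86_feature_detected!("popcnt") {
        unsafe { _popcnt32(x) }
    } else {
        x.count_ones() as i32
    }
}

fn main() {
    assert_eq!(ones(0xFF), 8);
    assert_eq!(ones(-1), 32);
    assert_eq!(ones(0), 0);
}
```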

_rdrand16_stepx86-64 and rdrand

Read a hardware generated 16-bit random value and store the result in val. Returns 1 if a random value was generated, and 0 otherwise.

_rdrand32_stepx86-64 and rdrand

Read a hardware generated 32-bit random value and store the result in val. Returns 1 if a random value was generated, and 0 otherwise.

_rdrand64_stepx86-64 and rdrand

Read a hardware generated 64-bit random value and store the result in val. Returns 1 if a random value was generated, and 0 otherwise.

_rdseed16_stepx86-64 and rdseed

Read a 16-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.

_rdseed32_stepx86-64 and rdseed

Read a 32-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.

_rdseed64_stepx86-64 and rdseed

Read a 64-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.

_rdtscx86-64

Reads the current value of the processor’s time-stamp counter.

_subborrow_u32x86-64

Subtracts unsigned 32-bit integer b from a with unsigned 8-bit borrow-in c_in (carry or overflow flag), stores the unsigned 32-bit result in out, and returns the borrow-out (carry or overflow flag).

_subborrow_u64x86-64

Subtracts unsigned 64-bit integer b from a with unsigned 8-bit borrow-in c_in (carry or overflow flag), stores the unsigned 64-bit result in out, and returns the borrow-out (carry or overflow flag).
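Chaining the returned borrow into the next call builds multi-word subtraction. A minimal 128-bit sketch (assuming an x86_64 target; the function name and limb layout are invented for this example):

```rust
use std::arch::x86_64::*;

// 128-bit subtraction built from two 64-bit limbs, chaining the
// borrow through _subborrow_u64. Returns (low, high, final borrow).
fn sub_128(a: (u64, u64), b: (u64, u64)) -> (u64, u64, u8) {
    unsafe {
        let (mut lo, mut hi) = (0u64, 0u64);
        let borrow = _subborrow_u64(0, a.0, b.0, &mut lo);
        let borrow = _subborrow_u64(borrow, a.1, b.1, &mut hi);
        (lo, hi, borrow)
    }
}

fn main() {
    // (1 << 64) - 1: the low limb borrows from the high limb,
    // so the low limb wraps to u64::MAX and the high limb becomes 0.
    let (lo, hi, borrow) = sub_128((0, 1), (1, 0));
    assert_eq!(lo, u64::MAX);
    assert_eq!(hi, 0);
    assert_eq!(borrow, 0);
}
```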

_t1mskc_u32x86-64 and tbm

Clears all bits below the least significant zero of x and sets all other bits.

_t1mskc_u64x86-64 and tbm

Clears all bits below the least significant zero of x and sets all other bits.

_tzcnt_u32x86-64 and bmi1

Counts the number of trailing least significant zero bits.

_tzcnt_u64x86-64 and bmi1

Counts the number of trailing least significant zero bits.

_tzmsk_u32x86-64 and tbm

Sets all bits below the least significant one of x and clears all other bits.

_tzmsk_u64x86-64 and tbm

Sets all bits below the least significant one of x and clears all other bits.

_xgetbvx86-64 and xsave

Reads the contents of the extended control register XCR specified in xcr_no.

_xrstorx86-64 and xsave

Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xrstor64x86-64 and xsave

Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xrstorsx86-64 and xsave,xsaves

Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xrstors64x86-64 and xsave,xsaves

Performs a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xsavex86-64 and xsave

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsave64x86-64 and xsave

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsavecx86-64 and xsave,xsavec

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsavec64x86-64 and xsave,xsavec

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsaveoptx86-64 and xsave,xsaveopt

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsaveopt64x86-64 and xsave,xsaveopt

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsavesx86-64 and xsave,xsaves

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsaves64x86-64 and xsave,xsaves

Performs a full or partial save of the enabled processor states to memory at mem_addr.

_xsetbvx86-64 and xsave

Copies 64-bits from val to the extended control register (XCR) specified by a.

_MM_SHUFFLEExperimentalx86-64

A utility function for creating masks to use with Intel shuffle and permute intrinsics.

_bittestExperimentalx86-64

Returns the bit in position b of the memory addressed by p.

_bittest64Experimentalx86-64

Returns the bit in position b of the memory addressed by p.

_bittestandcomplementExperimentalx86-64

Returns the bit in position b of the memory addressed by p, then inverts that bit.

_bittestandcomplement64Experimentalx86-64

Returns the bit in position b of the memory addressed by p, then inverts that bit.

_bittestandresetExperimentalx86-64

Returns the bit in position b of the memory addressed by p, then resets that bit to 0.

_bittestandreset64Experimentalx86-64

Returns the bit in position b of the memory addressed by p, then resets that bit to 0.

_bittestandsetExperimentalx86-64

Returns the bit in position b of the memory addressed by p, then sets the bit to 1.

_bittestandset64Experimentalx86-64

Returns the bit in position b of the memory addressed by p, then sets the bit to 1.

_kand_mask16Experimentalx86-64 and avx512f

Compute the bitwise AND of 16-bit masks a and b, and store the result in k.

_kandn_mask16Experimentalx86-64 and avx512f

Compute the bitwise NOT of 16-bit masks a and then AND with b, and store the result in k.

_knot_mask16Experimentalx86-64 and avx512f

Compute the bitwise NOT of 16-bit mask a, and store the result in k.

_kor_mask16Experimentalx86-64 and avx512f

Compute the bitwise OR of 16-bit masks a and b, and store the result in k.

_kxnor_mask16Experimentalx86-64 and avx512f

Compute the bitwise XNOR of 16-bit masks a and b, and store the result in k.

_kxor_mask16Experimentalx86-64 and avx512f

Compute the bitwise XOR of 16-bit masks a and b, and store the result in k.

_mm256_cvtph_psExperimentalx86-64 and f16c

Converts the 8 x 16-bit half-precision float values in the 128-bit vector a into 8 x 32-bit float values stored in a 256-bit wide vector.

_mm256_cvtps_phExperimentalx86-64 and f16c

Converts the 8 x 32-bit float values in the 256-bit vector a into 8 x 16-bit half-precision float values stored in a 128-bit wide vector.

_mm256_madd52hi_epu64Experimentalx86-64 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm256_madd52lo_epu64Experimentalx86-64 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm512_abs_epi32Experimentalx86-64 and avx512f

Computes the absolute values of packed 32-bit integers in a.

_mm512_abs_epi64Experimentalx86-64 and avx512f

Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst.

_mm512_abs_pdExperimentalx86-64 and avx512f

Finds the absolute value of each packed double-precision (64-bit) floating-point element in v2, storing the results in dst.

_mm512_abs_psExperimentalx86-64 and avx512f

Finds the absolute value of each packed single-precision (32-bit) floating-point element in v2, storing the results in dst.

_mm512_add_epi32Experimentalx86-64 and avx512f

Add packed 32-bit integers in a and b, and store the results in dst.

_mm512_add_epi64Experimentalx86-64 and avx512f

Add packed 64-bit integers in a and b, and store the results in dst.

_mm512_add_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.

_mm512_add_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.

_mm512_add_round_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.

_mm512_add_round_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.

_mm512_and_epi32Experimentalx86-64 and avx512f

Compute the bitwise AND of 512 bits (composed of packed 32-bit integers) in a and b, and store the results in dst.

_mm512_and_epi64Experimentalx86-64 and avx512f

Compute the bitwise AND of 512 bits (composed of packed 64-bit integers) in a and b, and store the results in dst.

_mm512_and_si512Experimentalx86-64 and avx512f

Compute the bitwise AND of 512 bits (representing integer data) in a and b, and store the result in dst.

_mm512_cmp_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b based on the comparison operand specified by op.

_mm512_cmp_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b based on the comparison operand specified by op.

_mm512_cmp_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by op.

_mm512_cmp_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by op.

_mm512_cmp_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op.

_mm512_cmp_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op.

_mm512_cmp_round_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op.

_mm512_cmp_round_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op.

_mm512_cmpeq_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for equality, and store the results in a mask vector.
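The "mask vector" these comparisons produce is a `__mmask16` (a 16-bit integer), with bit i holding the outcome for lane i. A scalar sketch of `_mm512_cmpeq_epi32_mask` (the helper name and slice-based signature are illustrative, not the real API):

```rust
// Scalar model of _mm512_cmpeq_epi32_mask: compare 16 lanes for
// equality and pack the per-lane outcomes into a 16-bit mask,
// bit i corresponding to lane i.
fn cmpeq_epi32_mask(a: &[i32; 16], b: &[i32; 16]) -> u16 {
    let mut k = 0u16;
    for i in 0..16 {
        if a[i] == b[i] {
            k |= 1 << i;
        }
    }
    k
}

fn main() {
    let a = [1i32; 16];
    let mut b = [1i32; 16];
    b[3] = 9; // lane 3 differs
    // Every bit set except bit 3.
    assert_eq!(cmpeq_epi32_mask(&a, &b), 0xFFFF & !(1 << 3));
    println!("ok");
}
```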

_mm512_cmpeq_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for equality, and store the results in a mask vector.

_mm512_cmpeq_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for equality, and store the results in a mask vector.

_mm512_cmpeq_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for equality, and store the results in a mask vector.

_mm512_cmpeq_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for equality, and store the results in a mask vector.

_mm512_cmpeq_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for equality, and store the results in a mask vector.

_mm512_cmpge_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector.

_mm512_cmpge_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector.

_mm512_cmpge_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector.

_mm512_cmpge_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector.

_mm512_cmpgt_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for greater-than, and store the results in a mask vector.

_mm512_cmpgt_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for greater-than, and store the results in a mask vector.

_mm512_cmpgt_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in a mask vector.

_mm512_cmpgt_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in a mask vector.

_mm512_cmple_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmple_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmple_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmple_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmple_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmple_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector.

_mm512_cmplt_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for less-than, and store the results in a mask vector.

_mm512_cmplt_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for less-than, and store the results in a mask vector.

_mm512_cmplt_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in a mask vector.

_mm512_cmplt_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in a mask vector.

_mm512_cmplt_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for less-than, and store the results in a mask vector.

_mm512_cmplt_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for less-than, and store the results in a mask vector.

_mm512_cmpneq_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for inequality, and store the results in a mask vector.

_mm512_cmpneq_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for inequality, and store the results in a mask vector.

_mm512_cmpneq_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for inequality, and store the results in a mask vector.

_mm512_cmpneq_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for inequality, and store the results in a mask vector.

_mm512_cmpneq_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for inequality, and store the results in a mask vector.

_mm512_cmpneq_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for inequality, and store the results in a mask vector.

_mm512_cmpnle_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector.

_mm512_cmpnle_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector.

_mm512_cmpnlt_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector.

_mm512_cmpnlt_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector.

_mm512_cmpord_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector.

_mm512_cmpord_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector.

_mm512_cmpunord_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector.

_mm512_cmpunord_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector.
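The ordered/unordered predicates reduce to NaN checks on each lane pair. A scalar sketch of the per-lane tests behind `_mm512_cmpord_ps_mask` and `_mm512_cmpunord_ps_mask` (helper names are illustrative):

```rust
// Per-lane predicate of _mm512_cmpord_ps_mask: true when neither
// operand is NaN.
fn ordered(a: f32, b: f32) -> bool {
    !a.is_nan() && !b.is_nan()
}

// Per-lane predicate of _mm512_cmpunord_ps_mask: true when at least
// one operand is NaN.
fn unordered(a: f32, b: f32) -> bool {
    a.is_nan() || b.is_nan()
}

fn main() {
    assert!(ordered(1.0, 2.0));
    assert!(!ordered(f32::NAN, 2.0));
    assert!(unordered(f32::NAN, 2.0));
    assert!(!unordered(1.0, 2.0));
    println!("ok");
}
```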

_mm512_cvt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.

_mm512_cvt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.

_mm512_cvt_roundps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_cvtps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.

_mm512_cvtps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.

_mm512_cvtps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.

_mm512_cvtt_roundpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_cvtt_roundpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_cvtt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_cvtt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_cvttpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.

_mm512_cvttpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.

_mm512_cvttps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.

_mm512_cvttps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
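"With truncation" means each lane is rounded toward zero, regardless of the current rounding mode. A scalar sketch of one lane of `_mm512_cvttps_epi32` for in-range inputs (out-of-range values behave differently on hardware, which produces the integer indefinite value 0x80000000):

```rust
// Scalar model of one lane of _mm512_cvttps_epi32 for in-range
// inputs: round toward zero, like Rust's `as` cast.
fn cvtt_lane(x: f32) -> i32 {
    x.trunc() as i32
}

fn main() {
    assert_eq!(cvtt_lane(2.9), 2);
    assert_eq!(cvtt_lane(-2.9), -2); // toward zero, not floor (-3)
    println!("ok");
}
```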

_mm512_div_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst.

_mm512_div_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst.

_mm512_div_round_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst.

_mm512_div_round_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst.

_mm512_extractf32x4_psExperimentalx86-64 and avx512f

Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the result in dst.

_mm512_fmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.

_mm512_fmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.

_mm512_fmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.

_mm512_fmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.

_mm512_fmaddsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.

_mm512_fmaddsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.

_mm512_fmaddsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.

_mm512_fmaddsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
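"Alternatively add and subtract" follows a fixed lane pattern: even lanes subtract c, odd lanes add it. A scalar sketch of `_mm512_fmaddsub_ps` (the function name and slice-based signature are illustrative; the model also uses two roundings per lane, where the hardware fuses the multiply-add):

```rust
// Scalar model of _mm512_fmaddsub_ps: even lanes compute a*b - c,
// odd lanes compute a*b + c.
fn fmaddsub(a: &[f32], b: &[f32], c: &[f32]) -> Vec<f32> {
    a.iter()
        .zip(b)
        .zip(c)
        .enumerate()
        .map(|(i, ((&a, &b), &c))| {
            if i % 2 == 0 { a * b - c } else { a * b + c }
        })
        .collect()
}

fn main() {
    let a = [1.0f32, 1.0, 1.0, 1.0];
    let b = [2.0f32, 2.0, 2.0, 2.0];
    let c = [1.0f32, 1.0, 1.0, 1.0];
    // Lanes alternate: 2-1, 2+1, 2-1, 2+1.
    assert_eq!(fmaddsub(&a, &b, &c), vec![1.0, 3.0, 1.0, 3.0]);
    println!("ok");
}
```

The `fmsubadd` family below is the mirror image: even lanes add c, odd lanes subtract it.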

_mm512_fmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.

_mm512_fmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.

_mm512_fmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.

_mm512_fmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.

_mm512_fmsubadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.

_mm512_fmsubadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.

_mm512_fmsubadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.

_mm512_fmsubadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.

_mm512_fnmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.

_mm512_fnmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.

_mm512_fnmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.

_mm512_fnmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.

_mm512_fnmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.

_mm512_fnmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.

_mm512_fnmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.

_mm512_fnmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.

_mm512_getexp_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.

_mm512_getexp_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
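The "floor(log2(x))" phrasing can be modeled per lane in plain Rust (illustrative helper; the hardware extracts the exponent field directly and has dedicated results for 0, infinities, and NaN, which this sketch does not reproduce):

```rust
// Scalar model of one lane of _mm512_getexp_pd for finite, nonzero
// inputs: the unbiased exponent of |x|, returned as a float.
fn getexp_lane(x: f64) -> f64 {
    x.abs().log2().floor()
}

fn main() {
    assert_eq!(getexp_lane(8.0), 3.0);   // 8 = 2^3
    assert_eq!(getexp_lane(-2.0), 1.0);  // exponent of |x|
    assert_eq!(getexp_lane(0.75), -1.0); // 0.75 in [2^-1, 2^0)
    println!("ok");
}
```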

_mm512_getexp_round_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_getexp_round_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_getmant_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1

_mm512_getmant_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1

_mm512_getmant_round_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_getmant_round_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_i32gather_epi32Experimentalx86-64 and avx512f

Gather 32-bit integers from memory using 32-bit indices.

_mm512_i32gather_epi64Experimentalx86-64 and avx512f

Gather 64-bit integers from memory using 32-bit indices.

_mm512_i32gather_pdExperimentalx86-64 and avx512f

Gather double-precision (64-bit) floating-point elements from memory using 32-bit indices.

_mm512_i32gather_psExperimentalx86-64 and avx512f

Gather single-precision (32-bit) floating-point elements from memory using 32-bit indices.
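A gather reads one element per lane from non-contiguous memory locations. A scalar sketch of `_mm512_i32gather_epi32` (illustrative: the real intrinsic takes a raw base pointer and a byte-scale factor, and is unsafe; a slice stands in here):

```rust
// Scalar model of _mm512_i32gather_epi32: fetch 16 values from a
// slice at the positions named by 16 32-bit indices.
fn gather(base: &[i32], idx: &[u32; 16]) -> [i32; 16] {
    let mut out = [0i32; 16];
    for i in 0..16 {
        out[i] = base[idx[i] as usize];
    }
    out
}

fn main() {
    let base: Vec<i32> = (0..32).collect();
    let mut idx = [0u32; 16];
    for i in 0..16 {
        idx[i] = (15 - i) as u32; // gather in reverse order
    }
    let g = gather(&base, &idx);
    assert_eq!(g[0], 15);
    assert_eq!(g[15], 0);
    println!("ok");
}
```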

_mm512_i32scatter_epi32Experimentalx86-64 and avx512f

Scatter 32-bit integers from src into memory using 32-bit indices.

_mm512_i32scatter_epi64Experimentalx86-64 and avx512f

Scatter 64-bit integers from src into memory using 32-bit indices.

_mm512_i32scatter_pdExperimentalx86-64 and avx512f

Scatter double-precision (64-bit) floating-point elements from src into memory using 32-bit indices.

_mm512_i32scatter_psExperimentalx86-64 and avx512f

Scatter single-precision (32-bit) floating-point elements from src into memory using 32-bit indices.
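A scatter is the inverse of a gather: it writes one element per lane to non-contiguous locations. A scalar sketch of `_mm512_i32scatter_epi32` (illustrative: the real intrinsic takes a raw base pointer and byte scale and is unsafe; a mutable slice stands in here):

```rust
// Scalar model of _mm512_i32scatter_epi32: store 16 values from src
// into the slice positions named by idx. Writes happen in lane order,
// so if two indices collide the higher lane wins, as on hardware.
fn scatter(base: &mut [i32], idx: &[u32; 16], src: &[i32; 16]) {
    for i in 0..16 {
        base[idx[i] as usize] = src[i];
    }
}

fn main() {
    let mut base = [0i32; 16];
    let mut idx = [0u32; 16];
    let mut src = [0i32; 16];
    for i in 0..16 {
        idx[i] = (15 - i) as u32; // scatter in reverse order
        src[i] = i as i32;
    }
    scatter(&mut base, &idx, &src);
    assert_eq!(base[15], 0);
    assert_eq!(base[0], 15);
    println!("ok");
}
```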

_mm512_i64gather_epi32Experimentalx86-64 and avx512f

Gather 32-bit integers from memory using 64-bit indices.

_mm512_i64gather_epi64Experimentalx86-64 and avx512f

Gather 64-bit integers from memory using 64-bit indices.

_mm512_i64gather_pdExperimentalx86-64 and avx512f

Gather double-precision (64-bit) floating-point elements from memory using 64-bit indices.

_mm512_i64gather_psExperimentalx86-64 and avx512f

Gather single-precision (32-bit) floating-point elements from memory using 64-bit indices.

_mm512_i64scatter_epi32Experimentalx86-64 and avx512f

Scatter 32-bit integers from src into memory using 64-bit indices.

_mm512_i64scatter_epi64Experimentalx86-64 and avx512f

Scatter 64-bit integers from src into memory using 64-bit indices.

_mm512_i64scatter_pdExperimentalx86-64 and avx512f

Scatter double-precision (64-bit) floating-point elements from src into memory using 64-bit indices.

_mm512_i64scatter_psExperimentalx86-64 and avx512f

Scatter single-precision (32-bit) floating-point elements from src into memory using 64-bit indices.

_mm512_kandExperimentalx86-64 and avx512f

Compute the bitwise AND of 16-bit masks a and b, and store the result in k.

_mm512_kandnExperimentalx86-64 and avx512f

Compute the bitwise NOT of 16-bit mask a, then AND with b, and store the result in k.

_mm512_kmovExperimentalx86-64 and avx512f

Copy 16-bit mask a to k.

_mm512_knotExperimentalx86-64 and avx512f

Compute the bitwise NOT of 16-bit mask a, and store the result in k.

_mm512_korExperimentalx86-64 and avx512f

Compute the bitwise OR of 16-bit masks a and b, and store the result in k.

_mm512_kxnorExperimentalx86-64 and avx512f

Compute the bitwise XNOR of 16-bit masks a and b, and store the result in k.

_mm512_kxorExperimentalx86-64 and avx512f

Compute the bitwise XOR of 16-bit masks a and b, and store the result in k.
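Since a `__mmask16` is just a 16-bit integer, these mask operations are ordinary bitwise ops on `u16` values. Scalar equivalents (illustrative helper names):

```rust
// Scalar equivalents of the 16-bit mask operations. A __mmask16 is
// a plain u16, with bit i holding the predicate for lane i.
fn kand(a: u16, b: u16) -> u16 { a & b }
fn kor(a: u16, b: u16) -> u16 { a | b }
fn kxor(a: u16, b: u16) -> u16 { a ^ b }
fn kxnor(a: u16, b: u16) -> u16 { !(a ^ b) }
fn knot(a: u16) -> u16 { !a }
fn kandn(a: u16, b: u16) -> u16 { !a & b } // NOT of a, then AND with b

fn main() {
    assert_eq!(kand(0b1100, 0b1010), 0b1000);
    assert_eq!(kor(0b1100, 0b1010), 0b1110);
    assert_eq!(kandn(0b1100, 0b1010), 0b0010); // clears bits set in a
    assert_eq!(kxnor(0xFFFF, 0xFFFF), 0xFFFF);
    println!("ok");
}
```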

_mm512_loadu_pdExperimentalx86-64 and avx512f

Load 512 bits (composed of 8 packed double-precision (64-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm512_loadu_psExperimentalx86-64 and avx512f

Load 512 bits (composed of 16 packed single-precision (32-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm512_madd52hi_epu64Experimentalx86-64 and avx512ifma

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm512_madd52lo_epu64Experimentalx86-64 and avx512ifma

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
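The 52-bit multiply-add pair can be modeled per 64-bit lane with a `u128` intermediate (illustrative helpers; the hardware does this for all eight lanes at once):

```rust
// Scalar model of one 64-bit lane of _mm512_madd52lo_epu64 and
// _mm512_madd52hi_epu64: multiply the low 52 bits of b and c into a
// 104-bit product, then add the product's low (or high) 52 bits to a
// with a full 64-bit addition.
const MASK52: u64 = (1 << 52) - 1;

fn madd52lo(a: u64, b: u64, c: u64) -> u64 {
    let p = (b & MASK52) as u128 * (c & MASK52) as u128;
    a.wrapping_add((p as u64) & MASK52)
}

fn madd52hi(a: u64, b: u64, c: u64) -> u64 {
    let p = (b & MASK52) as u128 * (c & MASK52) as u128;
    a.wrapping_add(((p >> 52) as u64) & MASK52)
}

fn main() {
    assert_eq!(madd52lo(1, 3, 4), 13); // 1 + low52(12)
    // 2^51 * 2^51 = 2^102; its high 52 bits are 2^50.
    assert_eq!(madd52hi(0, 1 << 51, 1 << 51), 1 << 50);
    println!("ok");
}
```

This pair is the building block for 52-bit-limb big-integer arithmetic (e.g. modular multiplication in RSA implementations), which is what the avx512ifma extension targets.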

_mm512_mask2_permutex2var_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).

_mm512_mask2_permutex2var_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).

_mm512_mask2_permutex2var_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).

_mm512_mask2_permutex2var_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
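The two-source permute treats a and b as one 32-lane pool. A scalar sketch of the unmasked core operation that these `mask2_permutex2var` variants build on (illustrative helper; the masked forms additionally copy idx elements where the mask bit is clear):

```rust
// Scalar model of the core _mm512_permutex2var_epi32 operation:
// for each output lane, bits 3:0 of idx select a lane and bit 4
// selects the source vector (0 = a, 1 = b).
fn permutex2var(a: &[i32; 16], b: &[i32; 16], idx: &[u32; 16]) -> [i32; 16] {
    let mut out = [0i32; 16];
    for i in 0..16 {
        let sel = (idx[i] & 0x1F) as usize; // 5 selector bits
        out[i] = if sel < 16 { a[sel] } else { b[sel - 16] };
    }
    out
}

fn main() {
    let a = [0i32; 16];
    let mut b = [0i32; 16];
    for i in 0..16 {
        b[i] = 100 + i as i32;
    }
    let mut idx = [0u32; 16];
    idx[0] = 16; // bit 4 set: lane 0 of b
    idx[1] = 5;  // bit 4 clear: lane 5 of a
    let out = permutex2var(&a, &b, &idx);
    assert_eq!(out[0], 100);
    assert_eq!(out[1], 0);
    println!("ok");
}
```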

_mm512_mask3_fmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
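The `mask3` writemask rule can be sketched per lane in plain Rust (illustrative helper, shortened to 4 lanes for clarity; the real `_mm512_mask3_fmadd_ps` operates on 16 lanes and `mul_add` models the fused multiply-add's single rounding):

```rust
// Scalar model of the _mm512_mask3_fmadd_ps writemask rule: lanes
// whose mask bit is set get the fused a*b + c result; the rest keep
// the corresponding value from c.
fn mask3_fmadd(a: &[f32; 4], b: &[f32; 4], c: &[f32; 4], k: u8) -> [f32; 4] {
    let mut out = [0f32; 4];
    for i in 0..4 {
        out[i] = if k & (1 << i) != 0 {
            a[i].mul_add(b[i], c[i]) // fused multiply-add
        } else {
            c[i] // mask bit clear: copy from c
        };
    }
    out
}

fn main() {
    let a = [1.0f32, 2.0, 3.0, 4.0];
    let b = [2.0f32; 4];
    let c = [10.0f32; 4];
    // Mask 0b0101 selects lanes 0 and 2 for the FMA result.
    assert_eq!(mask3_fmadd(&a, &b, &c, 0b0101), [12.0, 10.0, 16.0, 10.0]);
    println!("ok");
}
```

The `_mm512_mask_*` variants (not `mask3`) follow the same rule but copy unselected lanes from a src operand instead of from c.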

_mm512_mask3_fmaddsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmaddsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmaddsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmaddsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsubadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsubadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsubadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fmsubadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).

_mm512_mask3_fnmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
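The `mask3` entries above all share one writemask rule: masked-off lanes are copied from `c`, the third operand. A minimal scalar sketch of that rule (illustrative only, not the intrinsic itself; the real operation works on 512-bit vectors with a single fused rounding per lane):

```rust
// Scalar model of the `mask3` writemask rule used by
// _mm512_mask3_fmadd_pd and friends: a lane computes fma(a, b, c)
// when its mask bit is set, and is copied from `c` (the third
// operand, hence "mask3") when it is not.
fn mask3_fmadd(a: &[f64], b: &[f64], c: &[f64], k: u8) -> Vec<f64> {
    (0..a.len())
        .map(|i| {
            if k & (1 << i) != 0 {
                // fused multiply-add: a*b + c with a single rounding
                a[i].mul_add(b[i], c[i])
            } else {
                c[i] // masked-off lane keeps the value from c
            }
        })
        .collect()
}
```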

_mm512_mask_abs_epi32Experimentalx86-64 and avx512f

Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_abs_epi64Experimentalx86-64 and avx512f

Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_abs_pdExperimentalx86-64 and avx512f

Find the absolute value of each packed double-precision (64-bit) floating-point element in v2, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_abs_psExperimentalx86-64 and avx512f

Find the absolute value of each packed single-precision (32-bit) floating-point element in v2, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
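The `_mm512_mask_*` entries use the other writemask rule: masked-off lanes are copied from a separate `src` operand. A scalar sketch of that rule, using the abs-epi32 operation as the example (illustrative only):

```rust
// Scalar model of the writemask-with-src rule: a lane holds |a[i]|
// when mask bit i is set, and is copied from `src` otherwise.
fn mask_abs_epi32(src: &[i32], k: u16, a: &[i32]) -> Vec<i32> {
    (0..a.len())
        .map(|i| {
            if k & (1 << i) != 0 {
                // wrapping_abs: i32::MIN maps to itself, matching
                // the two's-complement behavior of the hardware
                a[i].wrapping_abs()
            } else {
                src[i]
            }
        })
        .collect()
}
```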

_mm512_mask_add_epi32Experimentalx86-64 and avx512f

Add packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_add_epi64Experimentalx86-64 and avx512f

Add packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_add_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_add_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_add_round_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_add_round_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_and_epi32Experimentalx86-64 and avx512f

Perform an element-by-element bitwise AND between packed 32-bit integer elements of v2 and v3, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_and_epi64Experimentalx86-64 and avx512f

Compute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cmp_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_round_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmp_round_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set).
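The masked compares above produce a bitmask rather than a vector, and apply the zeromask rule: a result bit can be set only if the corresponding mask bit was set. A scalar sketch, with `op` as a plain closure standing in for the comparison-operand encoding (not the actual Intel `_MM_CMPINT_*` constants):

```rust
// Scalar model of a masked compare: result bit i is set only when
// zeromask bit i is set AND the comparison chosen by `op` holds;
// all other bits are zeroed.
fn mask_cmp_epi32(m: u16, a: &[i32], b: &[i32], op: impl Fn(i32, i32) -> bool) -> u16 {
    (0..a.len()).fold(0u16, |acc, i| {
        if m & (1 << i) != 0 && op(a[i], b[i]) {
            acc | (1 << i)
        } else {
            acc // masked-off or failed lanes stay zero
        }
    })
}
```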

_mm512_mask_cmpeq_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpeq_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpeq_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpeq_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpeq_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpeq_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpge_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpge_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpge_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpge_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpgt_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpgt_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpgt_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpgt_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmple_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmplt_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_epi32_maskExperimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_epi64_maskExperimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_epu32_maskExperimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_epu64_maskExperimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpneq_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpnle_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpnle_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpnlt_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpnlt_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpord_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpord_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpunord_pd_maskExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).

_mm512_mask_cmpunord_ps_maskExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set).
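"Ordered" and "unordered" here refer to IEEE 754 NaN handling: two values are ordered when neither is NaN, unordered when at least one is. A scalar restatement of the per-lane predicate (not the intrinsics themselves):

```rust
// cmpord: true when neither operand is NaN (the lanes are ordered).
fn cmpord(a: f64, b: f64) -> bool {
    !a.is_nan() && !b.is_nan()
}

// cmpunord: true when at least one operand is NaN (unordered).
fn cmpunord(a: f64, b: f64) -> bool {
    a.is_nan() || b.is_nan()
}
```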

_mm512_mask_cvt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvt_roundps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_cvtps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvtps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvtps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvtt_roundpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_cvtt_roundpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_cvtt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_cvtt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_cvttpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvttpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvttps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_cvttps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
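"With truncation" in the `cvtt` entries means round toward zero, regardless of the current MXCSR rounding mode. A scalar sketch of the per-lane conversion; the real intrinsics additionally return a sentinel value for out-of-range inputs, which this model omits:

```rust
// Truncating float-to-int conversion: round each element toward
// zero (f32::trunc), then narrow to i32. In-range inputs only.
fn cvtt_ps_epi32(a: &[f32]) -> Vec<i32> {
    a.iter().map(|&x| x.trunc() as i32).collect()
}
```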

_mm512_mask_div_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_div_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_div_round_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_div_round_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_fmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmaddsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmaddsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmaddsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmaddsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsubadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsubadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsubadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fmsubadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_fnmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). Rounding is done according to the rounding parameter, which can be one of _MM_FROUND_TO_NEAREST_INT, _MM_FROUND_TO_NEG_INF, _MM_FROUND_TO_POS_INF, or _MM_FROUND_TO_ZERO (each combined with _MM_FROUND_NO_EXC), or _MM_FROUND_CUR_DIRECTION to use the current MXCSR rounding mode.
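The fnmadd/fnmsub entries above share one pattern: negate the product a*b, then add or subtract c, merging under a writemask. Since these intrinsics are experimental (nightly Rust, AVX-512 hardware), the semantics can be sketched as a portable scalar model; the function names below are illustrative, not part of the API.

```rust
// Scalar model of the fused negated multiply family, with the writemask
// convention used by the mask_ variants: lanes whose bit in `k` is clear
// copy their value from `a`.
fn mask_fnmadd(a: &[f64], b: &[f64], c: &[f64], k: u8) -> Vec<f64> {
    (0..a.len())
        .map(|i| if (k >> i) & 1 != 0 { -(a[i] * b[i]) + c[i] } else { a[i] })
        .collect()
}

fn mask_fnmsub(a: &[f64], b: &[f64], c: &[f64], k: u8) -> Vec<f64> {
    (0..a.len())
        .map(|i| if (k >> i) & 1 != 0 { -(a[i] * b[i]) - c[i] } else { a[i] })
        .collect()
}

fn main() {
    let (a, b, c) = ([2.0, 3.0], [4.0, 5.0], [1.0, 1.0]);
    // Lane 0 is selected: -(2*4) + 1 = -7. Lane 1 is masked out: copies a[1].
    assert_eq!(mask_fnmadd(&a, &b, &c, 0b01), vec![-7.0, 3.0]);
    assert_eq!(mask_fnmsub(&a, &b, &c, 0b11), vec![-9.0, -16.0]);
}
```

Note that the hardware performs the multiply and add/subtract as a single fused operation with one rounding step, which this two-step scalar model does not capture.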

_mm512_mask_getexp_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.

_mm512_mask_getexp_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.

_mm512_mask_getexp_round_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_getexp_round_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
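The "essentially calculates floor(log2(x))" remark in the getexp entries can be made concrete with a scalar sketch (illustrative only; the real instruction has special IEEE-754 behavior for 0, NaN, infinity, and denormals that this ignores):

```rust
// Scalar sketch of getexp: the unbiased binary exponent of |x|,
// returned as a float, i.e. floor(log2(|x|)).
fn getexp(x: f64) -> f64 {
    x.abs().log2().floor()
}

fn main() {
    assert_eq!(getexp(1.5), 0.0);   // 1.5 is in [2^0, 2^1)
    assert_eq!(getexp(10.0), 3.0);  // 10 is in [2^3, 2^4)
    assert_eq!(getexp(0.75), -1.0); // 0.75 is in [2^-1, 2^0)
}
```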

_mm512_mask_getmant_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
    _MM_MANT_NORM_1_2     // interval [1, 2)
    _MM_MANT_NORM_p5_2    // interval [0.5, 2)
    _MM_MANT_NORM_p5_1    // interval [0.5, 1)
    _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
    _MM_MANT_SIGN_src  // sign = sign(src)
    _MM_MANT_SIGN_zero // sign = 0
    _MM_MANT_SIGN_nan  // dst = NaN if sign(src) = 1

_mm512_mask_getmant_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
    _MM_MANT_NORM_1_2     // interval [1, 2)
    _MM_MANT_NORM_p5_2    // interval [0.5, 2)
    _MM_MANT_NORM_p5_1    // interval [0.5, 1)
    _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
    _MM_MANT_SIGN_src  // sign = sign(src)
    _MM_MANT_SIGN_zero // sign = 0
    _MM_MANT_SIGN_nan  // dst = NaN if sign(src) = 1

_mm512_mask_getmant_round_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
    _MM_MANT_NORM_1_2     // interval [1, 2)
    _MM_MANT_NORM_p5_2    // interval [0.5, 2)
    _MM_MANT_NORM_p5_1    // interval [0.5, 1)
    _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
    _MM_MANT_SIGN_src  // sign = sign(src)
    _MM_MANT_SIGN_zero // sign = 0
    _MM_MANT_SIGN_nan  // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_getmant_round_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
    _MM_MANT_NORM_1_2     // interval [1, 2)
    _MM_MANT_NORM_p5_2    // interval [0.5, 2)
    _MM_MANT_NORM_p5_1    // interval [0.5, 1)
    _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
    _MM_MANT_SIGN_src  // sign = sign(src)
    _MM_MANT_SIGN_zero // sign = 0
    _MM_MANT_SIGN_nan  // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
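For one concrete combination of the getmant parameters, the normalization can be sketched in scalar form. This assumes interv = _MM_MANT_NORM_1_2 and sc = _MM_MANT_SIGN_src and ignores the special-value rules; the function name is illustrative.

```rust
// Scalar sketch of getmant with interv = _MM_MANT_NORM_1_2 and
// sc = _MM_MANT_SIGN_src: scale |x| into [1, 2) by dividing out
// 2^floor(log2(|x|)), then reapply the source sign.
fn getmant_1_2_sign_src(x: f64) -> f64 {
    let e = x.abs().log2().floor();
    x.signum() * (x.abs() / 2f64.powf(e))
}

fn main() {
    assert_eq!(getmant_1_2_sign_src(10.0), 1.25);  // 10 / 2^3
    assert_eq!(getmant_1_2_sign_src(-0.75), -1.5); // 0.75 / 2^-1, sign kept
}
```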

_mm512_mask_i32gather_epi32Experimentalx86-64 and avx512f

Gather 32-bit integers from memory using 32-bit indices.

_mm512_mask_i32gather_epi64Experimentalx86-64 and avx512f

Gather 64-bit integers from memory using 32-bit indices.

_mm512_mask_i32gather_pdExperimentalx86-64 and avx512f

Gather double-precision (64-bit) floating-point elements from memory using 32-bit indices.

_mm512_mask_i32gather_psExperimentalx86-64 and avx512f

Gather single-precision (32-bit) floating-point elements from memory using 32-bit indices.

_mm512_mask_i32scatter_epi32Experimentalx86-64 and avx512f

Scatter 32-bit integers from src into memory using 32-bit indices.

_mm512_mask_i32scatter_epi64Experimentalx86-64 and avx512f

Scatter 64-bit integers from src into memory using 32-bit indices.

_mm512_mask_i32scatter_pdExperimentalx86-64 and avx512f

Scatter double-precision (64-bit) floating-point elements from src into memory using 32-bit indices.

_mm512_mask_i32scatter_psExperimentalx86-64 and avx512f

Scatter single-precision (32-bit) floating-point elements from src into memory using 32-bit indices.

_mm512_mask_i64gather_epi32Experimentalx86-64 and avx512f

Gather 32-bit integers from memory using 64-bit indices.

_mm512_mask_i64gather_epi64Experimentalx86-64 and avx512f

Gather 64-bit integers from memory using 64-bit indices.

_mm512_mask_i64gather_pdExperimentalx86-64 and avx512f

Gather double-precision (64-bit) floating-point elements from memory using 64-bit indices.

_mm512_mask_i64gather_psExperimentalx86-64 and avx512f

Gather single-precision (32-bit) floating-point elements from memory using 64-bit indices.

_mm512_mask_i64scatter_epi32Experimentalx86-64 and avx512f

Scatter 32-bit integers from src into memory using 64-bit indices.

_mm512_mask_i64scatter_epi64Experimentalx86-64 and avx512f

Scatter 64-bit integers from src into memory using 64-bit indices.

_mm512_mask_i64scatter_pdExperimentalx86-64 and avx512f

Scatter double-precision (64-bit) floating-point elements from src into memory using 64-bit indices.

_mm512_mask_i64scatter_psExperimentalx86-64 and avx512f

Scatter single-precision (32-bit) floating-point elements from src into memory using 64-bit indices.
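The gather/scatter entries above all follow one access pattern: an index vector supplies a per-lane memory offset, and the writemask gates each lane's load or store. A scalar model (illustrative names, `usize` indices standing in for the scaled 32/64-bit indices) makes the masking behavior explicit:

```rust
// Scalar model of the masked gather/scatter pattern: `idx` holds a
// per-lane offset into `mem`; gather lanes with a clear mask bit keep
// `src`, and scatter lanes with a clear mask bit skip the store.
fn mask_gather(src: &[i32], k: u16, idx: &[usize], mem: &[i32]) -> Vec<i32> {
    (0..idx.len())
        .map(|lane| if (k >> lane) & 1 != 0 { mem[idx[lane]] } else { src[lane] })
        .collect()
}

fn mask_scatter(mem: &mut [i32], k: u16, idx: &[usize], vals: &[i32]) {
    for lane in 0..idx.len() {
        if (k >> lane) & 1 != 0 {
            mem[idx[lane]] = vals[lane];
        }
    }
}

fn main() {
    let mem = [10, 20, 30, 40];
    // Lane 0 gathers mem[2]; lane 1 is masked out and keeps src[1] = 0.
    assert_eq!(mask_gather(&[0, 0], 0b01, &[2, 3], &mem), vec![30, 0]);

    let mut out = [0i32; 4];
    // Only lane 1 stores: out[3] = 8; lane 0's store is suppressed.
    mask_scatter(&mut out, 0b10, &[0, 3], &[7, 8]);
    assert_eq!(out, [0, 0, 0, 8]);
}
```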

_mm512_mask_max_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_max_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_max_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
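The "store ... in dst using writemask k (elements are copied from src when the corresponding mask bit is not set)" phrasing repeated throughout this list describes one merge rule. A scalar model for the max case (illustrative name) shows it:

```rust
// Scalar model of the writemask merge used throughout this family,
// shown for signed 32-bit max: selected lanes get max(a, b); the rest
// are copied from `src`.
fn mask_max_epi32(src: &[i32], k: u16, a: &[i32], b: &[i32]) -> Vec<i32> {
    (0..a.len())
        .map(|i| if (k >> i) & 1 != 0 { a[i].max(b[i]) } else { src[i] })
        .collect()
}

fn main() {
    // Lane 0 selected: max(1, 3) = 3. Lane 1 masked out: copies src[1] = 9.
    assert_eq!(mask_max_epi32(&[9, 9], 0b01, &[1, 5], &[3, 2]), vec![3, 9]);
}
```

The maskz_ variants listed later in this module differ only in the masked-out case: they write zero instead of copying from src.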

_mm512_mask_min_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_min_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_min_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_mask_movedup_pdExperimentalx86-64 and avx512f

Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_movehdup_psExperimentalx86-64 and avx512f

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_moveldup_psExperimentalx86-64 and avx512f

Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_mul_epi32Experimentalx86-64 and avx512f

Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_mul_epu32Experimentalx86-64 and avx512f

Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
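The mul_epi32/mul_epu32 entries describe a widening multiply: each 64-bit lane multiplies only the low 32-bit halves of its operands and keeps the full 64-bit product. A scalar model for the signed case (illustrative name):

```rust
// Scalar model of mul_epi32: each 64-bit lane sign-extends the low
// 32 bits of `a` and `b` and stores their full 64-bit product;
// masked-out lanes copy from `src`.
fn mask_mul_epi32(src: &[i64], k: u8, a: &[i64], b: &[i64]) -> Vec<i64> {
    (0..a.len())
        .map(|i| {
            if (k >> i) & 1 != 0 {
                (a[i] as i32 as i64) * (b[i] as i32 as i64)
            } else {
                src[i]
            }
        })
        .collect()
}

fn main() {
    // The high 32 bits of a[0] are ignored; only the low halves multiply.
    let a = [(1i64 << 32) | 5, 2];
    assert_eq!(mask_mul_epi32(&[-1, -1], 0b01, &a, &[3, 3]), vec![15, -1]);
}
```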

_mm512_mask_mul_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_mul_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_mul_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter, which can be one of _MM_FROUND_TO_NEAREST_INT, _MM_FROUND_TO_NEG_INF, _MM_FROUND_TO_POS_INF, or _MM_FROUND_TO_ZERO (each combined with _MM_FROUND_NO_EXC), or _MM_FROUND_CUR_DIRECTION to use the current MXCSR rounding mode.

_mm512_mask_mul_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter, which can be one of _MM_FROUND_TO_NEAREST_INT, _MM_FROUND_TO_NEG_INF, _MM_FROUND_TO_POS_INF, or _MM_FROUND_TO_ZERO (each combined with _MM_FROUND_NO_EXC), or _MM_FROUND_CUR_DIRECTION to use the current MXCSR rounding mode.

_mm512_mask_mullo_epi32Experimentalx86-64 and avx512f

Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_mullox_epi64Experimentalx86-64 and avx512f

Multiply the packed 64-bit integers in a and b, and store the low 64 bits of the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_or_epi32Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_or_epi64Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permute_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permute_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutevar_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Note that this intrinsic shuffles across 128-bit lanes, unlike past intrinsics that use the permutevar name. This intrinsic is identical to _mm512_mask_permutexvar_epi32, and it is recommended that you use that intrinsic name.

_mm512_mask_permutevar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutevar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutex2var_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_permutex2var_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_permutex2var_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).

_mm512_mask_permutex2var_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
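The permutex2var entries select each destination lane from either a or b via an index vector. For the 32-bit case with 16 lanes, each idx element uses its low 4 bits as a lane number and the next bit to pick the source; a scalar sketch (illustrative name, assuming that bit layout):

```rust
// Scalar model of permutex2var for 16 lanes of 32-bit integers: each
// index uses its low 4 bits to select a lane and bit 4 to choose the
// source vector (`a` if clear, `b` if set).
fn permutex2var_epi32(a: &[i32; 16], idx: &[u32; 16], b: &[i32; 16]) -> [i32; 16] {
    let mut dst = [0i32; 16];
    for i in 0..16 {
        let lane = (idx[i] & 0xF) as usize;
        dst[i] = if idx[i] & 0x10 != 0 { b[lane] } else { a[lane] };
    }
    dst
}

fn main() {
    let a: [i32; 16] = std::array::from_fn(|i| i as i32);
    let b: [i32; 16] = std::array::from_fn(|i| 100 + i as i32);
    let mut idx = [0u32; 16];
    idx[0] = 3;        // bit 4 clear: take a[3]
    idx[1] = 0x10 | 2; // bit 4 set:   take b[2]
    let dst = permutex2var_epi32(&a, &idx, &b);
    assert_eq!(dst[0], 3);
    assert_eq!(dst[1], 102);
}
```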

_mm512_mask_permutex_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutex_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutexvar_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutexvar_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutexvar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_permutexvar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
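The permutexvar entries describe a fully general cross-lane shuffle: every destination lane is drawn from any source lane via an index vector. In scalar terms (illustrative name):

```rust
// Scalar model of permutexvar: an arbitrary cross-lane shuffle,
// dst[i] = a[idx[i] % lanes].
fn permutexvar_epi32(idx: &[u32], a: &[i32]) -> Vec<i32> {
    idx.iter().map(|&j| a[j as usize % a.len()]).collect()
}

fn main() {
    assert_eq!(permutexvar_epi32(&[2, 0, 1], &[10, 20, 30]), vec![30, 10, 20]);
}
```

This contrasts with the permute/permutevar entries above, whose shuffles are confined within 128-bit lanes.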

_mm512_mask_rcp14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_mask_rcp14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_mask_rol_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_rol_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_rolv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_rolv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_ror_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_ror_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_rorv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_rorv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
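The rolv/rorv entries rotate each lane by a per-lane count taken from b. Rust's integer types already expose exactly this per-element operation, so a scalar model (illustrative name) is direct:

```rust
// Scalar model of the variable rotates: Rust's built-in rotate_left /
// rotate_right match the per-lane semantics of rolv / rorv directly;
// masked-out lanes copy from `src`.
fn mask_rolv_epi32(src: &[u32], k: u8, a: &[u32], b: &[u32]) -> Vec<u32> {
    (0..a.len())
        .map(|i| if (k >> i) & 1 != 0 { a[i].rotate_left(b[i]) } else { src[i] })
        .collect()
}

fn main() {
    // The top bit of 0x8000_0001 wraps around to bit 0: result 0x0000_0003.
    assert_eq!(
        mask_rolv_epi32(&[0, 0], 0b11, &[0x8000_0001, 1], &[1, 4]),
        vec![3, 16]
    );
}
```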

_mm512_mask_rsqrt14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_mask_rsqrt14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_mask_shuffle_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_f32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_f64x2Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_i32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_i64x2Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_shuffle_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sll_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sll_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_slli_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_slli_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sllv_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sllv_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sqrt_pdExperimentalx86-64 and avx512f

Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sqrt_psExperimentalx86-64 and avx512f

Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sqrt_round_pdExperimentalx86-64 and avx512f

Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter, which can be one of _MM_FROUND_TO_NEAREST_INT, _MM_FROUND_TO_NEG_INF, _MM_FROUND_TO_POS_INF, or _MM_FROUND_TO_ZERO (each combined with _MM_FROUND_NO_EXC), or _MM_FROUND_CUR_DIRECTION to use the current MXCSR rounding mode.

_mm512_mask_sqrt_round_psExperimentalx86-64 and avx512f

Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter, which can be one of _MM_FROUND_TO_NEAREST_INT, _MM_FROUND_TO_NEG_INF, _MM_FROUND_TO_POS_INF, or _MM_FROUND_TO_ZERO (each combined with _MM_FROUND_NO_EXC), or _MM_FROUND_CUR_DIRECTION to use the current MXCSR rounding mode.

_mm512_mask_sra_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sra_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srai_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srai_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srav_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srav_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srl_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srl_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srli_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srli_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srlv_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_srlv_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
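The sra*/srl* entries above differ only in what fills the vacated high bits: the arithmetic shifts (sra, srai, srav) shift in copies of the sign bit, while the logical shifts (srl, srli, srlv) shift in zeros. A scalar contrast (illustrative names):

```rust
// Scalar model contrasting the arithmetic (srav) and logical (srlv)
// variable right shifts: arithmetic shifts replicate the sign bit,
// logical shifts fill with zeros.
fn srav_epi32(a: &[i32], count: &[u32]) -> Vec<i32> {
    a.iter().zip(count).map(|(&x, &c)| x >> c).collect()
}

fn srlv_epi32(a: &[i32], count: &[u32]) -> Vec<i32> {
    a.iter().zip(count).map(|(&x, &c)| ((x as u32) >> c) as i32).collect()
}

fn main() {
    // -8 = 0xFFFF_FFF8: arithmetic shift keeps it negative,
    // logical shift produces a large positive value.
    assert_eq!(srav_epi32(&[-8], &[1]), vec![-4]);
    assert_eq!(srlv_epi32(&[-8], &[1]), vec![0x7FFF_FFFC]);
}
```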

_mm512_mask_sub_epi32Experimentalx86-64 and avx512f

Subtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sub_epi64Experimentalx86-64 and avx512f

Subtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sub_pdExperimentalx86-64 and avx512f

Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sub_psExperimentalx86-64 and avx512f

Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sub_round_pdExperimentalx86-64 and avx512f

Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_sub_round_psExperimentalx86-64 and avx512f

Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_xor_epi32Experimentalx86-64 and avx512f

Compute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_mask_xor_epi64Experimentalx86-64 and avx512f

Compute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_maskz_abs_epi32Experimentalx86-64 and avx512f

Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_abs_epi64Experimentalx86-64 and avx512f

Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_add_epi32Experimentalx86-64 and avx512f

Add packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_add_epi64Experimentalx86-64 and avx512f

Add packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
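
The writemask (mask_) and zeromask (maskz_) conventions used throughout these entries can be sketched in plain Rust, modeling a 512-bit vector of 32-bit lanes as a `[i32; 16]` (the function names here are hypothetical scalar models, not the intrinsics):

```rust
// Writemask semantics: lane i comes from the operation when mask bit i is
// set, otherwise it is copied from src.
fn mask_add_epi32(src: [i32; 16], k: u16, a: [i32; 16], b: [i32; 16]) -> [i32; 16] {
    let mut dst = [0i32; 16];
    for i in 0..16 {
        dst[i] = if (k >> i) & 1 == 1 { a[i].wrapping_add(b[i]) } else { src[i] };
    }
    dst
}

// Zeromask semantics: same, but unselected lanes are zeroed instead of copied.
fn maskz_add_epi32(k: u16, a: [i32; 16], b: [i32; 16]) -> [i32; 16] {
    mask_add_epi32([0; 16], k, a, b)
}
```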

_mm512_maskz_add_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_add_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_add_round_pdExperimentalx86-64 and avx512f

Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_add_round_psExperimentalx86-64 and avx512f

Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_and_epi32Experimentalx86-64 and avx512f

Compute the bitwise AND of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_and_epi64Experimentalx86-64 and avx512f

Compute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvt_roundps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_cvtps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvtps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvtps_pdExperimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvtt_roundpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_cvtt_roundpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_cvtt_roundps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_cvtt_roundps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_cvttpd_epi32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvttpd_epu32Experimentalx86-64 and avx512f

Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvttps_epi32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_cvttps_epu32Experimentalx86-64 and avx512f

Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
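
The cvtt variants above convert "with truncation" (toward zero), whereas the plain cvt variants use the current rounding mode (round-to-nearest-even by default). For in-range inputs, one truncating lane can be sketched in plain Rust (`cvtt_lane` is an illustrative name; the real intrinsics also define sentinel results for out-of-range inputs, which this sketch omits):

```rust
// One lane of a truncating float-to-int conversion for in-range values:
// the fractional part is discarded, rounding toward zero.
fn cvtt_lane(x: f32) -> i32 {
    x.trunc() as i32
}
```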

_mm512_maskz_div_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_div_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_div_round_pdExperimentalx86-64 and avx512f

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_div_round_psExperimentalx86-64 and avx512f

Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmaddsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmaddsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmaddsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmaddsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsubadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsubadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsubadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fmsubadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
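
The fmaddsub/fmsubadd entries above differ only in which lanes add and which subtract. Following Intel's element numbering (lane 0 is lowest), fmaddsub subtracts c on even lanes and adds it on odd lanes, and fmsubadd does the opposite; a plain-Rust sketch of that alternation (illustrative function names, no fused rounding modeled):

```rust
// Even lanes: a*b - c; odd lanes: a*b + c (fmaddsub-style alternation).
fn fmaddsub(a: &[f64], b: &[f64], c: &[f64]) -> Vec<f64> {
    (0..a.len())
        .map(|i| if i % 2 == 0 { a[i] * b[i] - c[i] } else { a[i] * b[i] + c[i] })
        .collect()
}

// Even lanes: a*b + c; odd lanes: a*b - c (fmsubadd-style alternation).
fn fmsubadd(a: &[f64], b: &[f64], c: &[f64]) -> Vec<f64> {
    (0..a.len())
        .map(|i| if i % 2 == 0 { a[i] * b[i] + c[i] } else { a[i] * b[i] - c[i] })
        .collect()
}
```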

_mm512_maskz_fnmadd_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmadd_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmadd_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmadd_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmsub_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmsub_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmsub_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_fnmsub_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_getexp_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.

_mm512_maskz_getexp_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.

_mm512_maskz_getexp_round_pdExperimentalx86-64 and avx512f

Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_getexp_round_psExperimentalx86-64 and avx512f

Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
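
The "floor(log2(x))" the getexp entries describe is just the unbiased binary exponent of the input. For normal, nonzero f64 values, one lane can be sketched exactly by bit extraction (hypothetical function name; special cases like zero, subnormals, infinity, and NaN are not modeled):

```rust
// One lane of a getexp-style operation for normal f64 inputs: extract the
// 11-bit biased exponent field and remove the bias of 1023.
fn getexp_lane(x: f64) -> f64 {
    (((x.to_bits() >> 52) & 0x7ff) as i64 - 1023) as f64
}
```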

_mm512_maskz_getmant_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1

_mm512_maskz_getmant_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1

_mm512_maskz_getmant_round_pdExperimentalx86-64 and avx512f

Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_getmant_round_psExperimentalx86-64 and avx512f

Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_max_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_max_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_max_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_min_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_min_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_min_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_maskz_movedup_pdExperimentalx86-64 and avx512f

Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_movehdup_psExperimentalx86-64 and avx512f

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_moveldup_psExperimentalx86-64 and avx512f

Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
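
The three *dup entries above are fixed shuffles over lane pairs: moveldup copies each even-indexed lane into its odd neighbor, movehdup does the reverse, and movedup_pd is the moveldup pattern on 64-bit lanes. A plain-Rust sketch over f32 slices (illustrative names, slice length assumed even):

```rust
// moveldup-style: each pair (a[2i], a[2i+1]) becomes (a[2i], a[2i]).
fn moveldup(a: &[f32]) -> Vec<f32> {
    a.chunks_exact(2).flat_map(|p| [p[0], p[0]]).collect()
}

// movehdup-style: each pair (a[2i], a[2i+1]) becomes (a[2i+1], a[2i+1]).
fn movehdup(a: &[f32]) -> Vec<f32> {
    a.chunks_exact(2).flat_map(|p| [p[1], p[1]]).collect()
}
```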

_mm512_maskz_mul_epi32Experimentalx86-64 and avx512f

Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mul_epu32Experimentalx86-64 and avx512f

Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mul_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mul_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mul_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mul_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_mullo_epi32Experimentalx86-64 and avx512f

Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
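
The difference between mul_epi32 and mullo_epi32 above is what survives the 32x32 multiply: mul_epi32 widens the low 32-bit half of each 64-bit element to a full 64-bit product, while mullo_epi32 keeps only the low 32 bits of each product. One lane of each, sketched in plain Rust (hypothetical names):

```rust
// mul_epi32-style lane: sign-extend both 32-bit inputs and keep the full
// 64-bit product.
fn mul_epi32_lane(a: i32, b: i32) -> i64 {
    a as i64 * b as i64
}

// mullo_epi32-style lane: 32x32 multiply, keep only the low 32 bits
// (i.e. wrapping multiplication).
fn mullo_epi32_lane(a: i32, b: i32) -> i32 {
    a.wrapping_mul(b)
}
```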

_mm512_maskz_or_epi32Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_or_epi64Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permute_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permute_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutevar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutevar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex2var_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex2var_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex2var_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex2var_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutex_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutexvar_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutexvar_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutexvar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_permutexvar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rcp14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_maskz_rcp14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_maskz_rol_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rol_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rolv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rolv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_ror_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_ror_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rorv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_rorv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
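The `rolv`/`rorv` variants above take a per-lane rotate count from `b` rather than a single immediate. A scalar sketch of the zero-masked variable left-rotate (hypothetical helper, not the intrinsic itself; the count is taken modulo the lane width):

```rust
// Hypothetical scalar model of _mm512_maskz_rolv_epi32: each 32-bit lane
// of a is rotated left by the matching lane of b; masked-out lanes are zeroed.
fn maskz_rolv_epi32(k: u16, a: [u32; 16], b: [u32; 16]) -> [u32; 16] {
    core::array::from_fn(|i| {
        if (k >> i) & 1 == 1 {
            a[i].rotate_left(b[i] & 31) // rotate count taken mod 32
        } else {
            0 // zeromask: disabled lane is zeroed
        }
    })
}

fn main() {
    let a = [0x8000_0001u32; 16];
    let dst = maskz_rolv_epi32(0b0101_0101_0101_0101, a, [1; 16]);
    assert_eq!(dst[0], 0x3); // bit 31 wraps around to bit 0
    assert_eq!(dst[1], 0);   // mask bit clear
    println!("{:#x} {:#x}", dst[0], dst[1]);
}
```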

_mm512_maskz_rsqrt14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_maskz_rsqrt14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.

_mm512_maskz_shuffle_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_f32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_f64x2Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_i32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_i64x2Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_shuffle_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sll_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sll_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_slli_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_slli_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sllv_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sllv_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sqrt_pdExperimentalx86-64 and avx512f

Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sqrt_psExperimentalx86-64 and avx512f

Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sqrt_round_pdExperimentalx86-64 and avx512f

Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter.

_mm512_maskz_sqrt_round_psExperimentalx86-64 and avx512f

Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter.

_mm512_maskz_sra_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sra_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srai_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srai_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srav_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srav_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srl_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srl_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srli_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srli_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srlv_epi32Experimentalx86-64 and avx512f

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_srlv_epi64Experimentalx86-64 and avx512f

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
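The `sra*` family above shifts in copies of the sign bit, while `srl*` shifts in zeros; for the variable-count forms, counts at or above the lane width saturate rather than wrap. A scalar sketch of one lane of each (hypothetical helpers, not the intrinsics themselves):

```rust
// Hypothetical single-lane models of _mm512_srav_epi32 vs _mm512_srlv_epi32.
fn srav_lane(a: i32, count: u32) -> i32 {
    // Arithmetic shift: the sign bit is replicated.
    // Counts >= 32 yield all sign bits (hardware saturates, not mod-32).
    if count >= 32 { a >> 31 } else { a >> count }
}

fn srlv_lane(a: i32, count: u32) -> i32 {
    // Logical shift: zeros shift in. Counts >= 32 yield 0.
    if count >= 32 { 0 } else { ((a as u32) >> count) as i32 }
}

fn main() {
    assert_eq!(srav_lane(-8, 1), -4);          // sign preserved
    assert_eq!(srlv_lane(-8, 1), 0x7FFF_FFFC); // zero shifted in
    assert_eq!(srav_lane(-8, 40), -1);         // saturated: all sign bits
    println!("ok");
}
```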

_mm512_maskz_sub_epi32Experimentalx86-64 and avx512f

Subtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sub_epi64Experimentalx86-64 and avx512f

Subtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sub_pdExperimentalx86-64 and avx512f

Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sub_psExperimentalx86-64 and avx512f

Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_sub_round_pdExperimentalx86-64 and avx512f

Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter.

_mm512_maskz_sub_round_psExperimentalx86-64 and avx512f

Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter.

_mm512_maskz_xor_epi32Experimentalx86-64 and avx512f

Compute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_maskz_xor_epi64Experimentalx86-64 and avx512f

Compute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_max_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst.

_mm512_max_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst.

_mm512_max_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst.

_mm512_max_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst.
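The `epi`/`epu` suffixes matter here: the same bit pattern can order differently under signed and unsigned comparison. A single-lane sketch (hypothetical helpers, not the intrinsics themselves):

```rust
// Hypothetical single-lane models of _mm512_max_epi32 (signed)
// vs _mm512_max_epu32 (unsigned).
fn max_epi32_lane(a: i32, b: i32) -> i32 { a.max(b) }
fn max_epu32_lane(a: u32, b: u32) -> u32 { a.max(b) }

fn main() {
    let a = -1i32; // bit pattern 0xFFFF_FFFF
    let b = 1i32;
    assert_eq!(max_epi32_lane(a, b), 1);                      // signed: -1 < 1
    assert_eq!(max_epu32_lane(a as u32, b as u32), u32::MAX); // unsigned: 0xFFFF_FFFF wins
    println!("ok");
}
```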

_mm512_max_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst.

_mm512_max_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst.

_mm512_max_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_max_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_min_epi32Experimentalx86-64 and avx512f

Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst.

_mm512_min_epi64Experimentalx86-64 and avx512f

Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst.

_mm512_min_epu32Experimentalx86-64 and avx512f

Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst.

_mm512_min_epu64Experimentalx86-64 and avx512f

Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst.

_mm512_min_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst.

_mm512_min_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst.

_mm512_min_round_pdExperimentalx86-64 and avx512f

Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_min_round_psExperimentalx86-64 and avx512f

Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.

_mm512_movedup_pdExperimentalx86-64 and avx512f

Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst.

_mm512_movehdup_psExperimentalx86-64 and avx512f

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst.

_mm512_moveldup_psExperimentalx86-64 and avx512f

Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst.
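The `moveldup`/`movehdup` pair copies each even- or odd-indexed element over its neighbor, so adjacent lane pairs end up equal (`movedup_pd` is the analogous operation on the 8 doubles). A scalar sketch (hypothetical helpers, not the intrinsics themselves):

```rust
// Hypothetical scalar models of _mm512_moveldup_ps / _mm512_movehdup_ps.
fn moveldup_ps(a: [f32; 16]) -> [f32; 16] {
    core::array::from_fn(|i| a[i & !1]) // each pair takes its even-indexed element
}
fn movehdup_ps(a: [f32; 16]) -> [f32; 16] {
    core::array::from_fn(|i| a[i | 1]) // each pair takes its odd-indexed element
}

fn main() {
    let a: [f32; 16] = core::array::from_fn(|i| i as f32);
    let lo = moveldup_ps(a);
    let hi = movehdup_ps(a);
    assert_eq!(&lo[..4], &[0.0, 0.0, 2.0, 2.0]);
    assert_eq!(&hi[..4], &[1.0, 1.0, 3.0, 3.0]);
    println!("{:?}", &lo[..4]);
}
```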

_mm512_mul_epi32Experimentalx86-64 and avx512f

Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst.

_mm512_mul_epu32Experimentalx86-64 and avx512f

Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst.

_mm512_mul_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.

_mm512_mul_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.

_mm512_mul_round_pdExperimentalx86-64 and avx512f

Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.

_mm512_mul_round_psExperimentalx86-64 and avx512f

Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.

_mm512_mullo_epi32Experimentalx86-64 and avx512f

Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst.
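The difference between `mul_epi32` and `mullo_epi32` is where the 64-bit product goes: `mul_epi32` keeps the full product of the low 32-bit halves of each 64-bit lane, while `mullo_epi32` keeps only the low 32 bits per 32-bit lane. A single-lane sketch (hypothetical helpers, not the intrinsics themselves):

```rust
// Hypothetical single-lane models of _mm512_mul_epi32 vs _mm512_mullo_epi32.
fn mul_epi32_lane(a_lo: i32, b_lo: i32) -> i64 {
    (a_lo as i64) * (b_lo as i64) // full 64-bit signed product
}
fn mullo_epi32_lane(a: i32, b: i32) -> i32 {
    a.wrapping_mul(b) // low 32 bits of the intermediate 64-bit product
}

fn main() {
    assert_eq!(mul_epi32_lane(100_000, 100_000), 10_000_000_000);
    assert_eq!(mullo_epi32_lane(100_000, 100_000), 1_410_065_408); // truncated
    println!("ok");
}
```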

_mm512_mullox_epi64Experimentalx86-64 and avx512f

Multiply packed 64-bit integers in a and b, and store the low 64 bits of the results in dst.

_mm512_or_epi32Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst.

_mm512_or_epi64Experimentalx86-64 and avx512f

Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst.

_mm512_or_si512Experimentalx86-64 and avx512f

Compute the bitwise OR of 512 bits (representing integer data) in a and b, and store the result in dst.

_mm512_permute_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst.

_mm512_permute_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst.

_mm512_permutevar_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst. Note that this intrinsic shuffles across 128-bit lanes, unlike past intrinsics that use the permutevar name. This intrinsic is identical to _mm512_permutexvar_epi32, and it is recommended that you use that intrinsic name.

_mm512_permutevar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst.

_mm512_permutevar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst.

_mm512_permutex2var_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.

_mm512_permutex2var_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.

_mm512_permutex2var_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.

_mm512_permutex2var_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
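For the 16-lane 32-bit `permutex2var` forms, each `idx` lane encodes both a source lane (low 4 bits) and a source operand (bit 4: clear for a, set for b). A scalar sketch of that selection (hypothetical helper, not the intrinsic itself):

```rust
// Hypothetical scalar model of _mm512_permutex2var_epi32: idx's low 4 bits
// pick a lane, and bit 4 picks between a (0) and b (1).
fn permutex2var_epi32(a: [i32; 16], idx: [i32; 16], b: [i32; 16]) -> [i32; 16] {
    core::array::from_fn(|i| {
        let sel = idx[i] as u32;
        let lane = (sel & 0xF) as usize;                // low 4 bits: source lane
        if sel & 0x10 == 0 { a[lane] } else { b[lane] } // bit 4: which operand
    })
}

fn main() {
    let a: [i32; 16] = core::array::from_fn(|i| i as i32);
    let b: [i32; 16] = core::array::from_fn(|i| 100 + i as i32);
    let mut idx = [0i32; 16];
    idx[0] = 0x12; // bit 4 set: lane 2 of b
    idx[1] = 0x03; // bit 4 clear: lane 3 of a
    let dst = permutex2var_epi32(a, idx, b);
    assert_eq!(dst[0], 102);
    assert_eq!(dst[1], 3);
    println!("{} {}", dst[0], dst[1]);
}
```
The 8-lane 64-bit forms work the same way with 3 index bits and bit 3 as the operand selector.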

_mm512_permutex_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst.

_mm512_permutex_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst.

_mm512_permutexvar_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.

_mm512_permutexvar_epi64Experimentalx86-64 and avx512f

Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.

_mm512_permutexvar_pdExperimentalx86-64 and avx512f

Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst.

_mm512_permutexvar_psExperimentalx86-64 and avx512f

Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst.
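A `permutexvar` is a full cross-lane gather from a single source: destination lane i takes the source lane named by `idx[i]`. A scalar sketch for the 32-bit integer form (hypothetical helper, not the intrinsic itself):

```rust
// Hypothetical scalar model of _mm512_permutexvar_epi32: only the low
// 4 bits of each index are used with 16 lanes.
fn permutexvar_epi32(idx: [i32; 16], a: [i32; 16]) -> [i32; 16] {
    core::array::from_fn(|i| a[(idx[i] & 0xF) as usize])
}

fn main() {
    let a: [i32; 16] = core::array::from_fn(|i| 10 * i as i32);
    let rev: [i32; 16] = core::array::from_fn(|i| 15 - i as i32);
    let dst = permutexvar_epi32(rev, a); // reverses the vector
    assert_eq!(dst[0], 150);             // lane 0 fetches a[15]
    assert_eq!(dst[15], 0);
    println!("{} {}", dst[0], dst[15]);
}
```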

_mm512_rcp14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.

_mm512_rcp14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.

_mm512_rol_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst.

_mm512_rol_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst.

_mm512_rolv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst.

_mm512_rolv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst.

_mm512_ror_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst.

_mm512_ror_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst.

_mm512_rorv_epi32Experimentalx86-64 and avx512f

Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst.

_mm512_rorv_epi64Experimentalx86-64 and avx512f

Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst.

_mm512_rsqrt14_pdExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.

_mm512_rsqrt14_psExperimentalx86-64 and avx512f

Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.
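The `rcp14`/`rsqrt14` guarantee is a relative-error bound, not an exact result: an approximation r of 1/x satisfies |r - 1/x| / |1/x| < 2^-14. A sketch of how such a bound can be checked (hypothetical helper; the values here stand in for a real approximation, since the intrinsic's table lookup is not modeled):

```rust
// Hypothetical relative-error check matching the 2^-14 bound quoted
// for the rcp14/rsqrt14 family.
fn within_rel_error(exact: f64, approx: f64, bound: f64) -> bool {
    ((approx - exact) / exact).abs() < bound
}

fn main() {
    let bound = (2.0f64).powi(-14); // about 6.1e-5
    // The exact value trivially satisfies the bound; a value 1% off does not.
    assert!(within_rel_error(1.0 / 3.0, 1.0 / 3.0, bound));
    assert!(!within_rel_error(1.0 / 3.0, (1.0 / 3.0) * 1.01, bound));
    println!("ok");
}
```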

_mm512_set1_epi32Experimentalx86-64 and avx512f

Broadcast 32-bit integer a to all elements of dst.

_mm512_set1_epi64Experimentalx86-64 and avx512f

Broadcast 64-bit integer a to all elements of dst.

_mm512_set1_pdExperimentalx86-64 and avx512f

Broadcast 64-bit float a to all elements of dst.

_mm512_set1_psExperimentalx86-64 and avx512f

Broadcast 32-bit float a to all elements of dst.

_mm512_set_epi32Experimentalx86-64 and avx512f

Sets packed 32-bit integers in dst with the supplied values.

_mm512_set_epi64Experimentalx86-64 and avx512f

Sets packed 64-bit integers in dst with the supplied values.

_mm512_set_pdExperimentalx86-64 and avx512f

Sets packed double-precision (64-bit) floating-point elements in dst with the supplied values.

_mm512_set_psExperimentalx86-64 and avx512f

Sets packed single-precision (32-bit) floating-point elements in dst with the supplied values.

_mm512_setr_epi32Experimentalx86-64 and avx512f

Sets packed 32-bit integers in dst with the supplied values in reverse order.

_mm512_setr_epi64Experimentalx86-64 and avx512f

Sets packed 64-bit integers in dst with the supplied values in reverse order.

_mm512_setr_pdExperimentalx86-64 and avx512f

Sets packed double-precision (64-bit) floating-point elements in dst with the supplied values in reverse order.

_mm512_setr_psExperimentalx86-64 and avx512f

Sets packed single-precision (32-bit) floating-point elements in dst with the supplied values in reverse order.
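The `set`/`setr` pair differ only in argument order: `set` takes its arguments from the highest element down to element 0, while `setr` ("reverse") takes them in element order. A scalar sketch (hypothetical helpers taking an array in place of the 16 scalar arguments):

```rust
// Hypothetical models of the _mm512_set_epi32 / _mm512_setr_epi32
// argument ordering; lane 0 is the lowest element.
fn set_epi32(vals_e15_to_e0: [i32; 16]) -> [i32; 16] {
    let mut dst = vals_e15_to_e0;
    dst.reverse(); // the last argument lands in element 0
    dst
}
fn setr_epi32(vals_e0_to_e15: [i32; 16]) -> [i32; 16] {
    vals_e0_to_e15 // arguments are already in element order
}

fn main() {
    let args: [i32; 16] = core::array::from_fn(|i| i as i32); // 0, 1, ..., 15
    assert_eq!(set_epi32(args)[0], 15);  // the last argument is element 0
    assert_eq!(setr_epi32(args)[0], 0);  // setr takes them low-to-high
    println!("ok");
}
```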

_mm512_setzero_pdExperimentalx86-64 and avx512f

Returns a vector of type __m512d with all elements set to zero.

_mm512_setzero_psExperimentalx86-64 and avx512f

Returns a vector of type __m512 with all elements set to zero.

_mm512_setzero_si512Experimentalx86-64 and avx512f

Returns a vector of type __m512i with all elements set to zero.

_mm512_shuffle_epi32Experimentalx86-64 and avx512f

Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst.

_mm512_shuffle_f32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst.
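The `shuffle_f32x4`/`shuffle_i32x4` family moves whole 128-bit chunks: the result's low two chunks come from a and its high two from b, each chosen by a 2-bit field of imm8. A scalar sketch (hypothetical helper, not the intrinsic itself):

```rust
// Hypothetical scalar model of _mm512_shuffle_f32x4: four 128-bit chunks,
// each selected by two bits of imm8; low half sourced from a, high from b.
fn shuffle_f32x4(a: [f32; 16], b: [f32; 16], imm8: u8) -> [f32; 16] {
    let mut dst = [0.0f32; 16];
    for chunk in 0..4 {
        let sel = ((imm8 >> (2 * chunk)) & 0b11) as usize; // 2-bit chunk selector
        let src = if chunk < 2 { &a } else { &b };         // a for chunks 0-1, b for 2-3
        for j in 0..4 {
            dst[chunk * 4 + j] = src[sel * 4 + j];
        }
    }
    dst
}

fn main() {
    let a: [f32; 16] = core::array::from_fn(|i| i as f32);
    let b: [f32; 16] = core::array::from_fn(|i| 100.0 + i as f32);
    // 0b01_00_11_10: chunk0 <- a's chunk 2, chunk1 <- a's chunk 3,
    //                chunk2 <- b's chunk 0, chunk3 <- b's chunk 1
    let dst = shuffle_f32x4(a, b, 0b01_00_11_10);
    assert_eq!(dst[0], 8.0);   // a's chunk 2 starts at lane 8
    assert_eq!(dst[8], 100.0); // b's chunk 0 starts at lane 0
    println!("{} {}", dst[0], dst[8]);
}
```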

_mm512_shuffle_f64x2Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst.

_mm512_shuffle_i32x4Experimentalx86-64 and avx512f

Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst.

_mm512_shuffle_i64x2