[][src]Module core::arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on AArch64 only.

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.

Structs

APSRExperimentalAArch64

Application Program Status Register

SYExperimentalAArch64

Full system is the required shareability domain, reads and writes are the required access types

float32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed f32.

float32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed f32.

float64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed f64.

float64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed f64.

int16x4_tExperimentalAArch64

ARM-specific 64-bit wide vector of four packed i16.

int16x8_tExperimentalAArch64

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed i32.

int32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed i32.

int64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed i64.

int64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed i64.

int8x8_tExperimentalAArch64

ARM-specific 64-bit wide vector of eight packed i8.

int8x16_tExperimentalAArch64

ARM-specific 128-bit wide vector of sixteen packed i8.

int8x16x2_tExperimentalAArch64

ARM-specific type containing two int8x16_t vectors.

int8x16x3_tExperimentalAArch64

ARM-specific type containing three int8x16_t vectors.

int8x16x4_tExperimentalAArch64

ARM-specific type containing four int8x16_t vectors.

int8x8x2_tExperimentalAArch64

ARM-specific type containing two int8x8_t vectors.

int8x8x3_tExperimentalAArch64

ARM-specific type containing three int8x8_t vectors.

int8x8x4_tExperimentalAArch64

ARM-specific type containing four int8x8_t vectors.

poly64_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed p64.

poly128_tExperimentalAArch64

ARM-specific 128-bit wide vector of one packed p64.

poly16x4_tExperimentalAArch64

ARM-specific 64-bit wide vector of four packed u16.

poly16x8_tExperimentalAArch64

ARM-specific 128-bit wide vector of eight packed u16.

poly64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed p64.

poly64x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed p64.

poly8x8_tExperimentalAArch64

ARM-specific 64-bit wide polynomial vector of eight packed u8.

poly8x16_tExperimentalAArch64

ARM-specific 128-bit wide vector of sixteen packed u8.

poly8x16x2_tExperimentalAArch64

ARM-specific type containing two poly8x16_t vectors.

poly8x16x3_tExperimentalAArch64

ARM-specific type containing three poly8x16_t vectors.

poly8x16x4_tExperimentalAArch64

ARM-specific type containing four poly8x16_t vectors.

poly8x8x2_tExperimentalAArch64

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_tExperimentalAArch64

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_tExperimentalAArch64

ARM-specific type containing four poly8x8_t vectors.

uint16x4_tExperimentalAArch64

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_tExperimentalAArch64

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed u64.

uint8x8_tExperimentalAArch64

ARM-specific 64-bit wide vector of eight packed u8.

uint8x16_tExperimentalAArch64

ARM-specific 128-bit wide vector of sixteen packed u8.

uint8x16x2_tExperimentalAArch64

ARM-specific type containing two uint8x16_t vectors.

uint8x16x3_tExperimentalAArch64

ARM-specific type containing three uint8x16_t vectors.

uint8x16x4_tExperimentalAArch64

ARM-specific type containing four uint8x16_t vectors.

uint8x8x2_tExperimentalAArch64

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_tExperimentalAArch64

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_tExperimentalAArch64

ARM-specific type containing four uint8x8_t vectors.

Constants

_TMFAILURE_CNCLExperimentalAArch64

Transaction executed a TCANCEL instruction

_TMFAILURE_DBGExperimentalAArch64

Transaction aborted due to a debug trap.

_TMFAILURE_ERRExperimentalAArch64

Transaction aborted because a non-permissible operation was attempted

_TMFAILURE_IMPExperimentalAArch64

Fallback error type for any other reason

_TMFAILURE_INTExperimentalAArch64

Transaction failed from interrupt

_TMFAILURE_MEMExperimentalAArch64

Transaction aborted because a conflict occurred

_TMFAILURE_NESTExperimentalAArch64

Transaction aborted due to transactional nesting level was exceeded

_TMFAILURE_REASONExperimentalAArch64

Extraction mask for failure reason

_TMFAILURE_RTRYExperimentalAArch64

Transaction retry is possible.

_TMFAILURE_SIZEExperimentalAArch64

Transaction aborted due to read or write set limit was exceeded

_TMFAILURE_TRIVIALExperimentalAArch64

Indicates a TRIVIAL version of TM is available

_TMSTART_SUCCESSExperimentalAArch64

Transaction successfully started.

Functions

__breakpointExperimentalAArch64

Inserts a breakpoint instruction.

__crc32dExperimentalAArch64 and crc

CRC32 single round checksum for quad words (64 bits).

__crc32cdExperimentalAArch64 and crc

CRC32-C single round checksum for quad words (64 bits).

__dmbExperimentalAArch64

Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.

__dsbExperimentalAArch64

Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.

__isbExperimentalAArch64

Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.

__nopExperimentalAArch64

Generates an unspecified no-op instruction.

__rsrExperimentalAArch64

Reads a 32-bit system register

__rsrpExperimentalAArch64

Reads a system register containing an address

__tcancelExperimentalAArch64 and tme

Cancels the current transaction and discards all state modifications that were performed transactionally.

__tcommitExperimentalAArch64 and tme

Commits the current transaction. For a nested transaction, the only effect is that the transactional nesting depth is decreased. For an outer transaction, the state modifications performed transactionally are committed to the architectural state.

__tstartExperimentalAArch64 and tme

Starts a new transaction. When the transaction starts successfully the return value is 0. If the transaction fails, all state modifications are discarded and a cause of the failure is encoded in the return value.

__ttestExperimentalAArch64 and tme

Tests if executing inside a transaction. If no transaction is currently executing, the return value is 0. Otherwise, this intrinsic returns the depth of the transaction.

__wsrExperimentalAArch64

Writes a 32-bit system register

__wsrpExperimentalAArch64

Writes a system register containing an address

_cls_u32ExperimentalAArch64

Counts the leading most significant bits set.

_cls_u64ExperimentalAArch64

Counts the leading most significant bits set.

_clz_u64ExperimentalAArch64

Count Leading Zeros.

_rbit_u64ExperimentalAArch64

Reverse the bit order.

_rev_u16ExperimentalAArch64

Reverse the order of the bytes.

_rev_u32ExperimentalAArch64

Reverse the order of the bytes.

_rev_u64ExperimentalAArch64

Reverse the order of the bytes.

brkExperimentalAArch64

Generates the trap instruction BRK 1

vadd_f32ExperimentalAArch64 and neon

Vector add.

vadd_f64ExperimentalAArch64 and neon

Vector add.

vadd_s8ExperimentalAArch64 and neon

Vector add.

vadd_s16ExperimentalAArch64 and neon

Vector add.

vadd_s32ExperimentalAArch64 and neon

Vector add.

vadd_u8ExperimentalAArch64 and neon

Vector add.

vadd_u16ExperimentalAArch64 and neon

Vector add.

vadd_u32ExperimentalAArch64 and neon

Vector add.

vaddd_s64ExperimentalAArch64 and neon

Vector add.

vaddd_u64ExperimentalAArch64 and neon

Vector add.

vaddl_s8ExperimentalAArch64 and neon

Vector long add.

vaddl_s16ExperimentalAArch64 and neon

Vector long add.

vaddl_s32ExperimentalAArch64 and neon

Vector long add.

vaddl_u8ExperimentalAArch64 and neon

Vector long add.

vaddl_u16ExperimentalAArch64 and neon

Vector long add.

vaddl_u32ExperimentalAArch64 and neon

Vector long add.

vaddq_f32ExperimentalAArch64 and neon

Vector add.

vaddq_f64ExperimentalAArch64 and neon

Vector add.

vaddq_s8ExperimentalAArch64 and neon

Vector add.

vaddq_s16ExperimentalAArch64 and neon

Vector add.

vaddq_s32ExperimentalAArch64 and neon

Vector add.

vaddq_s64ExperimentalAArch64 and neon

Vector add.

vaddq_u8ExperimentalAArch64 and neon

Vector add.

vaddq_u16ExperimentalAArch64 and neon

Vector add.

vaddq_u32ExperimentalAArch64 and neon

Vector add.

vaddq_u64ExperimentalAArch64 and neon

Vector add.

vaesdq_u8ExperimentalAArch64 and crypto

AES single round decryption.

vaeseq_u8ExperimentalAArch64 and crypto

AES single round encryption.

vaesimcq_u8ExperimentalAArch64 and crypto

AES inverse mix columns.

vaesmcq_u8ExperimentalAArch64 and crypto

AES mix columns.

vand_s8ExperimentalAArch64 and neon

Vector bitwise and

vand_s16ExperimentalAArch64 and neon

Vector bitwise and

vand_s32ExperimentalAArch64 and neon

Vector bitwise and

vand_s64ExperimentalAArch64 and neon

Vector bitwise and

vand_u8ExperimentalAArch64 and neon

Vector bitwise and

vand_u16ExperimentalAArch64 and neon

Vector bitwise and

vand_u32ExperimentalAArch64 and neon

Vector bitwise and

vand_u64ExperimentalAArch64 and neon

Vector bitwise and

vandq_s8ExperimentalAArch64 and neon

Vector bitwise and

vandq_s16ExperimentalAArch64 and neon

Vector bitwise and

vandq_s32ExperimentalAArch64 and neon

Vector bitwise and

vandq_s64ExperimentalAArch64 and neon

Vector bitwise and

vandq_u8ExperimentalAArch64 and neon

Vector bitwise and

vandq_u16ExperimentalAArch64 and neon

Vector bitwise and

vandq_u32ExperimentalAArch64 and neon

Vector bitwise and

vandq_u64ExperimentalAArch64 and neon

Vector bitwise and

vceq_f32ExperimentalAArch64 and neon

Floating-point compare equal

vceq_f64ExperimentalAArch64 and neon

Floating-point compare equal

vceq_p64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_s8ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_s16ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_s32ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_s64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_u8ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_u16ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_u32ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_u64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_f32ExperimentalAArch64 and neon

Floating-point compare equal

vceqq_f64ExperimentalAArch64 and neon

Floating-point compare equal

vceqq_p64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_s8ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_s16ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_s32ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_s64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_u8ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_u16ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_u32ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_u64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vcge_f32ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcge_f64ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcge_s8ExperimentalAArch64 and neon

Compare signed greater than or equal

vcge_s16ExperimentalAArch64 and neon

Compare signed greater than or equal

vcge_s32ExperimentalAArch64 and neon

Compare signed greater than or equal

vcge_s64ExperimentalAArch64 and neon

Compare signed greater than or equal

vcge_u8ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcge_u16ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcge_u32ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcge_u64ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgeq_f32ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcgeq_f64ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcgeq_s8ExperimentalAArch64 and neon

Compare signed greater than or equal

vcgeq_s16ExperimentalAArch64 and neon

Compare signed greater than or equal

vcgeq_s32ExperimentalAArch64 and neon

Compare signed greater than or equal

vcgeq_s64ExperimentalAArch64 and neon

Compare signed greater than or equal

vcgeq_u8ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgeq_u16ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgeq_u32ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgeq_u64ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgt_f32ExperimentalAArch64 and neon

Floating-point compare greater than

vcgt_f64ExperimentalAArch64 and neon

Floating-point compare greater than

vcgt_s8ExperimentalAArch64 and neon

Compare signed greater than

vcgt_s16ExperimentalAArch64 and neon

Compare signed greater than

vcgt_s32ExperimentalAArch64 and neon

Compare signed greater than

vcgt_s64ExperimentalAArch64 and neon

Compare signed greater than

vcgt_u8ExperimentalAArch64 and neon

Compare unsigned highe

vcgt_u16ExperimentalAArch64 and neon

Compare unsigned highe

vcgt_u32ExperimentalAArch64 and neon

Compare unsigned highe

vcgt_u64ExperimentalAArch64 and neon

Compare unsigned highe

vcgtq_f32ExperimentalAArch64 and neon

Floating-point compare greater than

vcgtq_f64ExperimentalAArch64 and neon

Floating-point compare greater than

vcgtq_s8ExperimentalAArch64 and neon

Compare signed greater than

vcgtq_s16ExperimentalAArch64 and neon

Compare signed greater than

vcgtq_s32ExperimentalAArch64 and neon

Compare signed greater than

vcgtq_s64ExperimentalAArch64 and neon

Compare signed greater than

vcgtq_u8ExperimentalAArch64 and neon

Compare unsigned highe

vcgtq_u16ExperimentalAArch64 and neon

Compare unsigned highe

vcgtq_u32ExperimentalAArch64 and neon

Compare unsigned highe

vcgtq_u64ExperimentalAArch64 and neon

Compare unsigned highe

vcle_f32ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcle_f64ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcle_s8ExperimentalAArch64 and neon

Compare signed less than or equal

vcle_s16ExperimentalAArch64 and neon

Compare signed less than or equal

vcle_s32ExperimentalAArch64 and neon

Compare signed less than or equal

vcle_s64ExperimentalAArch64 and neon

Compare signed less than or equal

vcle_u8ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcle_u16ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcle_u32ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcle_u64ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcleq_f32ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcleq_f64ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcleq_s8ExperimentalAArch64 and neon

Compare signed less than or equal

vcleq_s16ExperimentalAArch64 and neon

Compare signed less than or equal

vcleq_s32ExperimentalAArch64 and neon

Compare signed less than or equal

vcleq_s64ExperimentalAArch64 and neon

Compare signed less than or equal

vcleq_u8ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcleq_u16ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcleq_u32ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcleq_u64ExperimentalAArch64 and neon

Compare unsigned less than or equal

vclt_f32ExperimentalAArch64 and neon

Floating-point compare less than

vclt_f64ExperimentalAArch64 and neon

Floating-point compare less than

vclt_s8ExperimentalAArch64 and neon

Compare signed less than

vclt_s16ExperimentalAArch64 and neon

Compare signed less than

vclt_s32ExperimentalAArch64 and neon

Compare signed less than

vclt_s64ExperimentalAArch64 and neon

Compare signed less than

vclt_u8ExperimentalAArch64 and neon

Compare unsigned less than

vclt_u16ExperimentalAArch64 and neon

Compare unsigned less than

vclt_u32ExperimentalAArch64 and neon

Compare unsigned less than

vclt_u64ExperimentalAArch64 and neon

Compare unsigned less than

vcltq_f32ExperimentalAArch64 and neon

Floating-point compare less than

vcltq_f64ExperimentalAArch64 and neon

Floating-point compare less than

vcltq_s8ExperimentalAArch64 and neon

Compare signed less than

vcltq_s16ExperimentalAArch64 and neon

Compare signed less than

vcltq_s32ExperimentalAArch64 and neon

Compare signed less than

vcltq_s64ExperimentalAArch64 and neon

Compare signed less than

vcltq_u8ExperimentalAArch64 and neon

Compare unsigned less than

vcltq_u16ExperimentalAArch64 and neon

Compare unsigned less than

vcltq_u32ExperimentalAArch64 and neon

Compare unsigned less than

vcltq_u64ExperimentalAArch64 and neon

Compare unsigned less than

vcombine_f32ExperimentalAArch64 and neon

Vector combine

vcombine_f64ExperimentalAArch64 and neon

Vector combine

vcombine_p8ExperimentalAArch64 and neon

Vector combine

vcombine_p16ExperimentalAArch64 and neon

Vector combine

vcombine_p64ExperimentalAArch64 and neon

Vector combine

vcombine_s8ExperimentalAArch64 and neon

Vector combine

vcombine_s16ExperimentalAArch64 and neon

Vector combine

vcombine_s32ExperimentalAArch64 and neon

Vector combine

vcombine_s64ExperimentalAArch64 and neon

Vector combine

vcombine_u8ExperimentalAArch64 and neon

Vector combine

vcombine_u16ExperimentalAArch64 and neon

Vector combine

vcombine_u32ExperimentalAArch64 and neon

Vector combine

vcombine_u64ExperimentalAArch64 and neon

Vector combine

vdupq_n_s8ExperimentalAArch64 and neon

Duplicate vector element to vector or scalar

vdupq_n_u8ExperimentalAArch64 and neon

Duplicate vector element to vector or scalar

veor_s8ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_s16ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_s32ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_s64ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_u8ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_u16ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_u32ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veor_u64ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_s8ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_s16ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_s32ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_s64ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_u8ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_u16ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_u32ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

veorq_u64ExperimentalAArch64 and neon

Vector bitwise exclusive or (vector)

vextq_s8ExperimentalAArch64 and neon

Extract vector from pair of vectors

vextq_u8ExperimentalAArch64 and neon

Extract vector from pair of vectors

vget_lane_u8ExperimentalAArch64 and neon

Move vector element to general-purpose register

vget_lane_u64ExperimentalAArch64 and neon

Move vector element to general-purpose register

vgetq_lane_u16ExperimentalAArch64 and neon

Move vector element to general-purpose register

vgetq_lane_u32ExperimentalAArch64 and neon

Move vector element to general-purpose register

vgetq_lane_u64ExperimentalAArch64 and neon

Move vector element to general-purpose register

vhadd_s8ExperimentalAArch64 and neon

Halving add

vhadd_s16ExperimentalAArch64 and neon

Halving add

vhadd_s32ExperimentalAArch64 and neon

Halving add

vhadd_u8ExperimentalAArch64 and neon

Halving add

vhadd_u16ExperimentalAArch64 and neon

Halving add

vhadd_u32ExperimentalAArch64 and neon

Halving add

vhaddq_s8ExperimentalAArch64 and neon

Halving add

vhaddq_s16ExperimentalAArch64 and neon

Halving add

vhaddq_s32ExperimentalAArch64 and neon

Halving add

vhaddq_u8ExperimentalAArch64 and neon

Halving add

vhaddq_u16ExperimentalAArch64 and neon

Halving add

vhaddq_u32ExperimentalAArch64 and neon

Halving add

vhsub_s8ExperimentalAArch64 and neon

Signed halving subtract

vhsub_s16ExperimentalAArch64 and neon

Signed halving subtract

vhsub_s32ExperimentalAArch64 and neon

Signed halving subtract

vhsub_u8ExperimentalAArch64 and neon

Signed halving subtract

vhsub_u16ExperimentalAArch64 and neon

Signed halving subtract

vhsub_u32ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_s8ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_s16ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_s32ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_u8ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_u16ExperimentalAArch64 and neon

Signed halving subtract

vhsubq_u32ExperimentalAArch64 and neon

Signed halving subtract

vld1q_s8ExperimentalAArch64 and neon

Load multiple single-element structures to one, two, three, or four registers

vld1q_u8ExperimentalAArch64 and neon

Load multiple single-element structures to one, two, three, or four registers

vmaxv_f32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_f32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_f64ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u32ExperimentalAArch64 and neon

Horizontal vector max.

vminv_f32ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s8ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s16ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s32ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u8ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u16ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_f32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_f64ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s8ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s16ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u8ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u16ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u32ExperimentalAArch64 and neon

Horizontal vector min.

vmovl_s8ExperimentalAArch64 and neon

Vector long move.

vmovl_s16ExperimentalAArch64 and neon

Vector long move.

vmovl_s32ExperimentalAArch64 and neon

Vector long move.

vmovl_u8ExperimentalAArch64 and neon

Vector long move.

vmovl_u16ExperimentalAArch64 and neon

Vector long move.

vmovl_u32ExperimentalAArch64 and neon

Vector long move.

vmovn_s16ExperimentalAArch64 and neon

Vector narrow integer.

vmovn_s32ExperimentalAArch64 and neon

Vector narrow integer.

vmovn_s64ExperimentalAArch64 and neon

Vector narrow integer.

vmovn_u16ExperimentalAArch64 and neon

Vector narrow integer.

vmovn_u32ExperimentalAArch64 and neon

Vector narrow integer.

vmovn_u64ExperimentalAArch64 and neon

Vector narrow integer.

vmovq_n_u8ExperimentalAArch64 and neon

Duplicate vector element to vector or scalar

vmul_f32ExperimentalAArch64 and neon

Multiply

vmul_f64ExperimentalAArch64 and neon

Multiply

vmul_s8ExperimentalAArch64 and neon

Multiply

vmul_s16ExperimentalAArch64 and neon

Multiply

vmul_s32ExperimentalAArch64 and neon

Multiply

vmul_u8ExperimentalAArch64 and neon

Multiply

vmul_u16ExperimentalAArch64 and neon

Multiply

vmul_u32ExperimentalAArch64 and neon

Multiply

vmull_p64ExperimentalAArch64 and neon

Polynomial multiply long

vmulq_f32ExperimentalAArch64 and neon

Multiply

vmulq_f64ExperimentalAArch64 and neon

Multiply

vmulq_s8ExperimentalAArch64 and neon

Multiply

vmulq_s16ExperimentalAArch64 and neon

Multiply

vmulq_s32ExperimentalAArch64 and neon

Multiply

vmulq_u8ExperimentalAArch64 and neon

Multiply

vmulq_u16ExperimentalAArch64 and neon

Multiply

vmulq_u32ExperimentalAArch64 and neon

Multiply

vmvn_p8ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_s8ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_s16ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_s32ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_u8ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_u16ExperimentalAArch64 and neon

Vector bitwise not.

vmvn_u32ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_p8ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_s8ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_s16ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_s32ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_u8ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_u16ExperimentalAArch64 and neon

Vector bitwise not.

vmvnq_u32ExperimentalAArch64 and neon

Vector bitwise not.

vorr_s8ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_s16ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_s32ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_s64ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_u8ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_u16ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_u32ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorr_u64ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_s8ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_s16ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_s32ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_s64ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_u8ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_u16ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_u32ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vorrq_u64ExperimentalAArch64 and neon

Vector bitwise or (immediate, inclusive)

vpaddq_u8ExperimentalAArch64 and neon

Add pairwise

vpmax_f32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_s8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_s16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_s32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_u8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_u16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmax_u32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_f32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_f64ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmin_f32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_s8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_s16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_s32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_u8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_u16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpmin_u32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_f32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_f64ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vqadd_s8ExperimentalAArch64 and neon

Saturating add

vqadd_s16ExperimentalAArch64 and neon

Saturating add

vqadd_s32ExperimentalAArch64 and neon

Saturating add

vqadd_u8ExperimentalAArch64 and neon

Saturating add

vqadd_u16ExperimentalAArch64 and neon

Saturating add

vqadd_u32ExperimentalAArch64 and neon

Saturating add

vqaddq_s8ExperimentalAArch64 and neon

Saturating add

vqaddq_s16ExperimentalAArch64 and neon

Saturating add

vqaddq_s32ExperimentalAArch64 and neon

Saturating add

vqaddq_u8ExperimentalAArch64 and neon

Saturating add

vqaddq_u16ExperimentalAArch64 and neon

Saturating add

vqaddq_u32ExperimentalAArch64 and neon

Saturating add

vqmovn_u64ExperimentalAArch64 and neon

Unsigned saturating extract narrow.

vqsub_s8ExperimentalAArch64 and neon

Saturating subtract

vqsub_s16ExperimentalAArch64 and neon

Saturating subtract

vqsub_s32ExperimentalAArch64 and neon

Saturating subtract

vqsub_u8ExperimentalAArch64 and neon

Saturating subtract

vqsub_u16ExperimentalAArch64 and neon

Saturating subtract

vqsub_u32ExperimentalAArch64 and neon

Saturating subtract

vqsubq_s8ExperimentalAArch64 and neon

Saturating subtract

vqsubq_s16ExperimentalAArch64 and neon

Saturating subtract

vqsubq_s32ExperimentalAArch64 and neon

Saturating subtract

vqsubq_u8ExperimentalAArch64 and neon

Saturating subtract

vqsubq_u16ExperimentalAArch64 and neon

Saturating subtract

vqsubq_u32ExperimentalAArch64 and neon

Saturating subtract

vqtbl1_p8ExperimentalAArch64 and neon

Table look-up

vqtbl1_s8ExperimentalAArch64 and neon

Table look-up

vqtbl1_u8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl2_p8ExperimentalAArch64 and neon

Table look-up

vqtbl2_s8ExperimentalAArch64 and neon

Table look-up

vqtbl2_u8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl3_p8ExperimentalAArch64 and neon

Table look-up

vqtbl3_s8ExperimentalAArch64 and neon

Table look-up

vqtbl3_u8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl4_p8ExperimentalAArch64 and neon

Table look-up

vqtbl4_s8ExperimentalAArch64 and neon

Table look-up

vqtbl4_u8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_u8ExperimentalAArch64 and neon

Table look-up
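The `vqtbl*` intrinsics select bytes from a 16-byte (or multi-vector) table using a vector of indices; an out-of-range index produces 0 in that lane. A scalar sketch of the single-table case (hypothetical helper, not the intrinsic):

```rust
// Scalar sketch of vqtbl1q_u8-style table look-up: each index selects a
// byte from the 16-byte table; out-of-range indices yield 0.
// `table_lookup` is a hypothetical helper, not the intrinsic.
fn table_lookup(table: [u8; 16], idx: [u8; 16]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = if (idx[i] as usize) < 16 {
            table[idx[i] as usize]
        } else {
            0 // out-of-range index zeroes the lane
        };
    }
    out
}

fn main() {
    let table: [u8; 16] = core::array::from_fn(|i| (i as u8) * 2);
    let mut idx = [0u8; 16];
    idx[0] = 3;   // in range: selects table[3] == 6
    idx[1] = 200; // out of range: yields 0
    let r = table_lookup(table, idx);
    assert_eq!(r[0], 6);
    assert_eq!(r[1], 0);
}
```

The `vqtbl2`/`vqtbl3`/`vqtbl4` variants extend the table to two, three, or four vectors, widening the valid index range accordingly.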

vqtbx1_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_u8ExperimentalAArch64 and neon

Extended table look-up
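The `vqtbx*` ("extended") variants differ from `vqtbl*` in how they handle out-of-range indices: instead of zeroing the lane, they keep the corresponding lane of a fallback vector. A scalar sketch (hypothetical helper, not the intrinsic):

```rust
// Scalar sketch of vqtbx1q_u8-style extended table look-up: out-of-range
// indices keep the matching lane of `fallback` rather than producing 0.
// `table_lookup_ext` is a hypothetical helper, not the intrinsic.
fn table_lookup_ext(fallback: [u8; 16], table: [u8; 16], idx: [u8; 16]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = if (idx[i] as usize) < 16 {
            table[idx[i] as usize]
        } else {
            fallback[i] // out-of-range index preserves the fallback lane
        };
    }
    out
}

fn main() {
    let table = [7u8; 16];
    let fallback = [9u8; 16];
    let mut idx = [0u8; 16];
    idx[1] = 255; // out of range: lane keeps the fallback value
    let r = table_lookup_ext(fallback, table, idx);
    assert_eq!(r[0], 7);
    assert_eq!(r[1], 9);
}
```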

vreinterpret_u64_u32ExperimentalAArch64 and neon

Vector reinterpret cast operation

vreinterpretq_s8_u8ExperimentalAArch64 and neon

Vector reinterpret cast operation

vreinterpretq_u16_u8ExperimentalAArch64 and neon

Vector reinterpret cast operation

vreinterpretq_u32_u8ExperimentalAArch64 and neon

Vector reinterpret cast operation

vreinterpretq_u64_u8ExperimentalAArch64 and neon

Vector reinterpret cast operation

vreinterpretq_u8_s8ExperimentalAArch64 and neon

Vector reinterpret cast operation
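The `vreinterpret*` casts change only how the bits are grouped into lanes; no data is converted. A scalar sketch of `vreinterpret_u64_u32`-style behavior, assuming a little-endian target (so lane 0 supplies the low 32 bits); the helper name is hypothetical:

```rust
// Scalar sketch of vreinterpret_u64_u32: the 64 bits are unchanged, only
// the lane interpretation differs. Assumes little-endian lane order.
// `reinterpret_u64_u32` is a hypothetical helper, not the intrinsic.
fn reinterpret_u64_u32(a: [u32; 2]) -> u64 {
    (a[0] as u64) | ((a[1] as u64) << 32)
}

fn main() {
    assert_eq!(reinterpret_u64_u32([0x1, 0x2]), 0x0000_0002_0000_0001);
}
```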

vrhadd_s8ExperimentalAArch64 and neon

Rounding halving add

vrhadd_s16ExperimentalAArch64 and neon

Rounding halving add

vrhadd_s32ExperimentalAArch64 and neon

Rounding halving add

vrhadd_u8ExperimentalAArch64 and neon

Rounding halving add

vrhadd_u16ExperimentalAArch64 and neon

Rounding halving add

vrhadd_u32ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_s8ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_s16ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_s32ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_u8ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_u16ExperimentalAArch64 and neon

Rounding halving add

vrhaddq_u32ExperimentalAArch64 and neon

Rounding halving add
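The `vrhadd*` family computes `(a + b + 1) >> 1` per lane without intermediate overflow, i.e. the average rounded up. A scalar sketch that widens before adding (hypothetical helper, not the intrinsic):

```rust
// Scalar sketch of vrhadd_u8-style rounding halving add: per-lane
// (a + b + 1) >> 1, widened so the sum cannot overflow.
// `rounding_halving_add_u8` is a hypothetical helper, not the intrinsic.
fn rounding_halving_add_u8(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        // Widen to u16 so the +1 rounding bias cannot overflow.
        out[i] = ((a[i] as u16 + b[i] as u16 + 1) >> 1) as u8;
    }
    out
}

fn main() {
    assert_eq!(rounding_halving_add_u8([1; 8], [2; 8]), [2; 8]);
    assert_eq!(rounding_halving_add_u8([250; 8], [251; 8]), [251; 8]);
}
```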

vrsqrte_f32ExperimentalAArch64 and neon

Reciprocal square-root estimate.
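`vrsqrte_f32` returns a low-precision, table-driven per-lane estimate of `1/sqrt(x)`, typically refined with Newton-Raphson steps. This sketch shows the exact value the estimate approximates, not the intrinsic's result:

```rust
// Reference for what vrsqrte_f32 approximates per lane: 1/sqrt(x).
// The intrinsic's output is a coarse estimate of this value.
fn rsqrt_reference(x: f32) -> f32 {
    1.0 / x.sqrt()
}

fn main() {
    let r = rsqrt_reference(4.0);
    assert!((r - 0.5).abs() < 1e-6);
}
```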

vsha1cq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, choose.

vsha1h_u32ExperimentalAArch64 and crypto

SHA1 fixed rotate.

vsha1mq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, majority.

vsha1pq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, parity.

vsha1su0q_u32ExperimentalAArch64 and crypto

SHA1 schedule update accelerator, first part.

vsha1su1q_u32ExperimentalAArch64 and crypto

SHA1 schedule update accelerator, second part.

vsha256h2q_u32ExperimentalAArch64 and crypto

SHA256 hash update accelerator, upper part.

vsha256hq_u32ExperimentalAArch64 and crypto

SHA256 hash update accelerator.

vsha256su0q_u32ExperimentalAArch64 and crypto

SHA256 schedule update accelerator, first part.

vsha256su1q_u32ExperimentalAArch64 and crypto

SHA256 schedule update accelerator, second part.
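The "choose", "parity", and "majority" names in `vsha1cq_u32`, `vsha1pq_u32`, and `vsha1mq_u32` refer to SHA-1's bitwise round functions. The intrinsics apply them inside a fused four-round hash update; this sketch shows only the round functions themselves, on single words:

```rust
// SHA-1 round functions as used (internally, over four fused rounds) by
// the vsha1cq/vsha1pq/vsha1mq intrinsics. Shown standalone for clarity;
// this is not what the intrinsics compute in one call.
fn ch(b: u32, c: u32, d: u32) -> u32 {
    (b & c) | (!b & d) // "choose": b selects between c and d per bit
}
fn parity(b: u32, c: u32, d: u32) -> u32 {
    b ^ c ^ d // "parity": XOR of the three inputs
}
fn maj(b: u32, c: u32, d: u32) -> u32 {
    (b & c) | (b & d) | (c & d) // "majority": per-bit majority vote
}

fn main() {
    assert_eq!(ch(u32::MAX, 0xAAAA_AAAA, 0x5555_5555), 0xAAAA_AAAA);
    assert_eq!(parity(1, 2, 4), 7);
    assert_eq!(maj(u32::MAX, u32::MAX, 0), u32::MAX);
}
```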

vshlq_n_u8ExperimentalAArch64 and neon

Shift left

vshrq_n_u8ExperimentalAArch64 and neon

Unsigned shift right
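`vshlq_n_u8` shifts each lane left by a constant `N`, and `vshrq_n_u8` shifts each lane right. A scalar sketch of the per-lane semantics (hypothetical helpers, not the intrinsics):

```rust
// Scalar sketch of vshlq_n_u8 / vshrq_n_u8 per-lane semantics: shift
// every u8 lane by the constant amount. Hypothetical helpers.
fn shl_lanes(a: [u8; 16], n: u32) -> [u8; 16] {
    a.map(|x| x << n)
}
fn shr_lanes(a: [u8; 16], n: u32) -> [u8; 16] {
    a.map(|x| x >> n)
}

fn main() {
    assert_eq!(shl_lanes([0b0000_0011; 16], 2)[0], 0b0000_1100);
    assert_eq!(shr_lanes([0b1000_0000; 16], 7)[0], 1);
}
```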

vsub_f32ExperimentalAArch64 and neon

Subtract

vsub_f64ExperimentalAArch64 and neon

Subtract

vsub_s8ExperimentalAArch64 and neon

Subtract

vsub_s16ExperimentalAArch64 and neon

Subtract

vsub_s32ExperimentalAArch64 and neon

Subtract

vsub_s64ExperimentalAArch64 and neon

Subtract

vsub_u8ExperimentalAArch64 and neon

Subtract

vsub_u16ExperimentalAArch64 and neon

Subtract

vsub_u32ExperimentalAArch64 and neon

Subtract

vsub_u64ExperimentalAArch64 and neon

Subtract

vsubq_f32ExperimentalAArch64 and neon

Subtract

vsubq_f64ExperimentalAArch64 and neon

Subtract

vsubq_s8ExperimentalAArch64 and neon

Subtract

vsubq_s16ExperimentalAArch64 and neon

Subtract

vsubq_s32ExperimentalAArch64 and neon

Subtract

vsubq_s64ExperimentalAArch64 and neon

Subtract

vsubq_u8ExperimentalAArch64 and neon

Subtract

vsubq_u16ExperimentalAArch64 and neon

Subtract

vsubq_u32ExperimentalAArch64 and neon

Subtract

vsubq_u64ExperimentalAArch64 and neon

Subtract
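Unlike the `vqsub*` family above, the plain `vsub*` intrinsics wrap on underflow rather than saturating. A scalar sketch of that per-lane behavior (hypothetical helper, not the intrinsic):

```rust
// Scalar sketch of vsub_u8-style lane-wise subtraction: results wrap on
// underflow (contrast with vqsub, which saturates).
// `wrapping_sub_lanes` is a hypothetical helper, not the intrinsic.
fn wrapping_sub_lanes(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        out[i] = a[i].wrapping_sub(b[i]);
    }
    out
}

fn main() {
    assert_eq!(wrapping_sub_lanes([5; 8], [3; 8]), [2; 8]);
    assert_eq!(wrapping_sub_lanes([0; 8], [1; 8]), [255; 8]);
}
```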

vtbl1_p8ExperimentalAArch64 and neon

Table look-up

vtbl1_s8ExperimentalAArch64 and neon

Table look-up

vtbl1_u8ExperimentalAArch64 and neon

Table look-up

vtbl2_p8ExperimentalAArch64 and neon

Table look-up

vtbl2_s8ExperimentalAArch64 and neon

Table look-up

vtbl2_u8ExperimentalAArch64 and neon

Table look-up

vtbl3_p8ExperimentalAArch64 and neon

Table look-up

vtbl3_s8ExperimentalAArch64 and neon

Table look-up

vtbl3_u8ExperimentalAArch64 and neon

Table look-up

vtbl4_p8ExperimentalAArch64 and neon

Table look-up

vtbl4_s8ExperimentalAArch64 and neon

Table look-up

vtbl4_u8ExperimentalAArch64 and neon

Table look-up

vtbx1_p8ExperimentalAArch64 and neon

Extended table look-up

vtbx1_s8ExperimentalAArch64 and neon

Extended table look-up

vtbx1_u8ExperimentalAArch64 and neon

Extended table look-up

vtbx2_p8ExperimentalAArch64 and neon

Extended table look-up

vtbx2_s8ExperimentalAArch64 and neon

Extended table look-up

vtbx2_u8ExperimentalAArch64 and neon

Extended table look-up

vtbx3_p8ExperimentalAArch64 and neon

Extended table look-up

vtbx3_s8ExperimentalAArch64 and neon

Extended table look-up

vtbx3_u8ExperimentalAArch64 and neon

Extended table look-up

vtbx4_p8ExperimentalAArch64 and neon

Extended table look-up

vtbx4_s8ExperimentalAArch64 and neon

Extended table look-up

vtbx4_u8ExperimentalAArch64 and neon

Extended table look-up