Instruction | Meaning | Notes | Opcode |
AAA | ASCII adjust AL after addition | used with unpacked binary coded decimal | 0x37 |
AAD | ASCII adjust AX before division | The 8086/8088 datasheet documents only the base-10 version of the AAD instruction, but any other base will work; Intel's later documentation also describes the generic form. The NEC V20 and V30 always use base 10 and ignore the argument, causing a number of incompatibilities | 0xD5 |
AAM | ASCII adjust AX after multiplication | Only the base-10 version is documented; see the notes for AAD | 0xD4 |
AAS | ASCII adjust AL after subtraction | | 0x3F |
ADC | Add with carry | destination := destination + source + carry_flag | 0x10…0x15, 0x80/2…0x83/2 |
ADD | Add | r/m += r/imm; r += m/imm; | 0x00…0x05, 0x80/0…0x83/0 |
AND | Logical AND | r/m &= r/imm; r &= m/imm; | 0x20…0x25, 0x80/4…0x83/4 |
CALL | Call procedure | | 0x9A, 0xE8, 0xFF/2, 0xFF/3 |
CBW | Convert byte to word | | 0x98 |
CLC | Clear carry flag | CF = 0; | 0xF8 |
CLD | Clear direction flag | DF = 0; | 0xFC |
CLI | Clear interrupt flag | IF = 0; | 0xFA |
CMC | Complement carry flag | | 0xF5 |
CMP | Compare operands | | 0x38…0x3D, 0x80/7…0x83/7 |
CMPSB | Compare bytes in memory | | 0xA6 |
CMPSW | Compare words | | 0xA7 |
CWD | Convert word to doubleword | | 0x99 |
DAA | Decimal adjust AL after addition | | 0x27 |
DAS | Decimal adjust AL after subtraction | | 0x2F |
DEC | Decrement by 1 | | 0x48…0x4F, 0xFE/1, 0xFF/1 |
DIV | Unsigned divide | AX = DX:AX / r/m; DX = remainder | 0xF6/6, 0xF7/6 |
ESC | Used with floating-point unit | | 0xD8…0xDF |
HLT | Enter halt state | | 0xF4 |
IDIV | Signed divide | AX = DX:AX / r/m; DX = remainder | 0xF6/7, 0xF7/7 |
IMUL | Signed multiply | DX:AX = AX * r/m; AX = AL * r/m | 0x69, 0x6B, 0xF6/5, 0xF7/5, 0x0FAF |
IN | Input from port | AL = port[imm8]; AX = port[imm8]; AL = port[DX]; AX = port[DX]; | 0xE4, 0xE5, 0xEC, 0xED |
INC | Increment by 1 | | 0x40…0x47, 0xFE/0, 0xFF/0 |
INT | Call to interrupt | | 0xCC, 0xCD |
INTO | Call to interrupt if overflow | | 0xCE |
IRET | Return from interrupt | | 0xCF |
Jcc | Jump if condition | | 0x70…0x7F, 0x0F80…0x0F8F |
JCXZ | Jump if CX is zero | | 0xE3 |
JMP | Jump | | 0xE9…0xEB, 0xFF/4, 0xFF/5 |
LAHF | Load FLAGS into AH register | | 0x9F |
LDS | Load pointer using DS | | 0xC5 |
LEA | Load Effective Address | | 0x8D |
LES | Load ES with pointer | | 0xC4 |
LOCK | Assert BUS LOCK# signal | | 0xF0 |
LODSB | Load string byte | | 0xAC |
LODSW | Load string word | | 0xAD |
LOOP/LOOPx | Loop control | | 0xE0…0xE2 |
MOV | Move | Copies data from one location to another; r/m = r; r = r/m; | 0x88…0x8B, 0x8C, 0x8E, 0xA0…0xA3, 0xB0…0xBF, 0xC6/0, 0xC7/0 |
MOVSB | Move byte from string to string | | 0xA4 |
MOVSW | Move word from string to string | | 0xA5 |
MUL | Unsigned multiply | DX:AX = AX * r/m; AX = AL * r/m; | 0xF6/4…0xF7/4 |
NEG | Two's complement negation | | 0xF6/3…0xF7/3 |
NOP | No operation | opcode equivalent to XCHG AX, AX | 0x90 |
NOT | Negate the operand, logical NOT | | 0xF6/2…0xF7/2 |
OR | Logical OR | | 0x08…0x0D, 0x80…0x83/1 |
OUT | Output to port | port[imm8] = AL; port[imm8] = AX; port[DX] = AL; port[DX] = AX; | 0xE6, 0xE7, 0xEE, 0xEF |
POP | Pop data from stack | r/m = *SP++; POP CS works only on 8086/8088. Later CPUs use 0x0F as a prefix for newer instructions. | 0x07, 0x0F, 0x17, 0x1F, 0x58…0x5F, 0x8F/0 |
POPF | Pop FLAGS register from stack | FLAGS = *SP++; | 0x9D |
PUSH | Push data onto stack | | 0x06, 0x0E, 0x16, 0x1E, 0x50…0x57, 0x68, 0x6A, 0xFF/6 |
PUSHF | Push FLAGS onto stack | | 0x9C |
RCL | Rotate left | | 0xC0…0xC1/2, 0xD0…0xD3/2 |
RCR | Rotate right | | 0xC0…0xC1/3, 0xD0…0xD3/3 |
REPxx | Repeat MOVS/STOS/CMPS/LODS/SCAS | | 0xF2, 0xF3 |
RET | Return from procedure | Not a real machine instruction; the assembler translates it to RETN or RETF depending on the memory model of the target system. | |
RETN | Return from near procedure | | 0xC2, 0xC3 |
RETF | Return from far procedure | | 0xCA, 0xCB |
ROL | Rotate left | | 0xC0…0xC1/0, 0xD0…0xD3/0 |
ROR | Rotate right | | 0xC0…0xC1/1, 0xD0…0xD3/1 |
SAHF | Store AH into FLAGS | | 0x9E |
SAL | Shift Arithmetically left | r/m <<= 1; r/m <<= CL; | 0xC0…0xC1/4, 0xD0…0xD3/4 |
SAR | Shift Arithmetically right | r/m >>= 1; r/m >>= CL; | 0xC0…0xC1/7, 0xD0…0xD3/7 |
SBB | Subtraction with borrow | an alternative 1-byte encoding of SBB AL, AL is available via the undocumented SALC instruction | 0x18…0x1D, 0x80…0x83/3 |
SCASB | Compare byte string | | 0xAE |
SCASW | Compare word string | | 0xAF |
SHL | Shift left | | 0xC0…0xC1/4, 0xD0…0xD3/4 |
SHR | Shift right | | 0xC0…0xC1/5, 0xD0…0xD3/5 |
STC | Set carry flag | CF = 1; | 0xF9 |
STD | Set direction flag | DF = 1; | 0xFD |
STI | Set interrupt flag | IF = 1; | 0xFB |
STOSB | Store byte in string | | 0xAA |
STOSW | Store word in string | | 0xAB |
SUB | Subtraction | r/m -= r/imm; r -= m/imm; | 0x28…0x2D, 0x80…0x83/5 |
TEST | Logical compare | r/m & r/imm; r & m/imm; | 0x84, 0x85, 0xA8, 0xA9, 0xF6/0, 0xF7/0 |
WAIT | Wait until not busy | Waits until BUSY# pin is inactive | 0x9B |
XCHG | Exchange data | A spinlock typically uses XCHG as an atomic operation (see the sketch after this table). | 0x86, 0x87, 0x91…0x97 |
XLAT | Table look-up translation | behaves like MOV AL, [BX + AL] | 0xD7 |
XOR | Exclusive OR | r/m ^= r/imm; r ^= m/imm; | 0x30…0x35, 0x80…0x83/6 |
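A few minimal sketches of how these base instructions combine in practice (NASM-style 16-bit assembly; the labels `count` and `lockvar` and the segment setup are assumptions for illustration, not part of any standard interface):

```nasm
; Unsigned division: DIV divides DX:AX by the operand, so clear DX first.
        xor     dx, dx              ; DX:AX = 0:AX
        mov     bx, 10
        div     bx                  ; AX = quotient, DX = remainder

; Copy CX bytes from DS:SI to ES:DI with a repeated string move.
        cld                         ; DF = 0, so SI and DI auto-increment
        mov     cx, [count]         ; hypothetical byte count
        rep     movsb

; Busy-wait spinlock acquire: XCHG with a memory operand implicitly asserts LOCK#.
spin:   mov     al, 1
        xchg    al, [lockvar]       ; atomically swap AL with the lock byte
        test    al, al
        jnz     spin                ; non-zero means the lock was already held
```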
Instruction | Meaning | Notes |
BSF | Bit scan forward | |
BSR | Bit scan reverse | |
BT | Bit test | |
BTC | Bit test and complement | |
BTR | Bit test and reset | |
BTS | Bit test and set | |
CDQ | Convert double-word to quad-word | Sign-extends EAX into EDX, forming the quad-word EDX:EAX. Since a signed division (IDIV) uses EDX:EAX as its input, CDQ must be called after setting EAX and before the division if EDX is not manually initialized (see the sketch after this table). |
CMPSD | Compare string double-word | Compares *ES:EDI with *DS:ESI and increments or decrements both EDI and ESI, depending on DF; can be prefixed with REPE or REPNE |
CWDE | Convert word to double-word | Unlike CWD, CWDE sign-extends AX to EAX instead of AX to DX:AX |
IBTS | Insert Bit String | discontinued with B1 step of 80386 |
INSD | Input from port to string double-word | |
IRETx | Interrupt return; D suffix means 32-bit return, F suffix means do not generate epilogue code | Use IRETD rather than IRET in 32-bit situations |
JECXZ | Jump if ECX is zero | |
LFS, LGS | Load far pointer | |
LSS | Load stack segment | |
LODSD | Load string double-word | EAX = *DS:ESI; ESI is incremented or decremented by 4 depending on DF; can be prefixed with REP |
LOOPW, LOOPccW | Loop, conditional loop | Same as LOOP, LOOPcc for earlier processors |
LOOPD, LOOPccD | Loop, conditional loop | Decrements ECX and jumps to the label if ECX ≠ 0 (and, for LOOPccD, if the condition holds); cc = Z, E, NZ, NE |
MOV to/from CR/DR/TR | Move to/from special registers | CR=control registers, DR=debug registers, TR=test registers |
MOVSD | Move string double-word | *ES:EDI = *DS:ESI; EDI and ESI are incremented or decremented by 4 depending on DF; can be prefixed with REP |
MOVSX | Move with sign-extension | r = r/m; and similar |
MOVZX | Move with zero-extension | r = r/m; and similar |
OUTSD | Output to port from string double-word | port[DX] = *DS:ESI; ESI is incremented or decremented by 4 depending on DF; can be prefixed with REP |
POPAD | Pop all double-word registers from stack | Does not pop register ESP off the stack |
POPFD | Pop data into EFLAGS register | |
PUSHAD | Push all double-word registers onto stack | |
PUSHFD | Push EFLAGS register onto stack | |
SCASD | Scan string data double-word | Compares *ES:EDI with EAX and increments or decrements EDI, depending on DF; can be prefixed with REPE or REPNE |
SETcc | Set byte to one on condition, zero otherwise | |
SHLD | Shift left double-word | |
SHRD | Shift right double-word | r1 = r1>>CL ∣ r2<<(operand width − CL); an 8-bit immediate can be used instead of CL |
STOSD | Store string double-word | *ES:EDI±± = EAX; ; can be prefixed with REP |
XBTS | Extract Bit String | discontinued with B1 step of 80386 |
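A minimal sketch of these 80386-era additions in use (NASM-style 32-bit assembly; `dividend`, `divisor`, and `flagbyte` are hypothetical labels):

```nasm
; Signed division: sign-extend EAX into EDX:EAX with CDQ before IDIV.
        mov     eax, [dividend]
        cdq                         ; EDX:EAX = sign-extended EAX
        idiv    dword [divisor]     ; EAX = quotient, EDX = remainder

; Zero-extend a byte and materialize a comparison result with SETcc.
        movzx   eax, byte [flagbyte]
        cmp     eax, 1
        sete    al                  ; AL = 1 if equal, 0 otherwise

; 64-bit logical right shift of EDX:EAX by CL (0..31) using SHRD.
        shrd    eax, edx, cl        ; bits shifted out of EDX fill the top of EAX
        shr     edx, cl
```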
Instruction | Meaning | Notes |
F2XM1 | 2^x − 1 | more precise than computing 2^x and then subtracting 1 when x is close to zero |
FABS | Absolute value | |
FADD | Add | |
FADDP | Add and pop | |
FBLD | Load BCD | |
FBSTP | Store BCD and pop | |
FCHS | Change sign | |
FCLEX | Clear exceptions | |
FCOM | Compare | |
FCOMP | Compare and pop | |
FCOMPP | Compare and pop twice | |
FDECSTP | Decrement floating point stack pointer | |
FDISI | Disable interrupts | 8087 only, otherwise FNOP |
FDIV | Divide | Pentium FDIV bug |
FDIVP | Divide and pop | |
FDIVR | Divide reversed | |
FDIVRP | Divide reversed and pop | |
FENI | Enable interrupts | 8087 only, otherwise FNOP |
FFREE | Free register | |
FIADD | Integer add | |
FICOM | Integer compare | |
FICOMP | Integer compare and pop | |
FIDIV | Integer divide | |
FIDIVR | Integer divide reversed | |
FILD | Load integer | |
FIMUL | Integer multiply | |
FINCSTP | Increment floating point stack pointer | |
FINIT | Initialize floating point processor | |
FIST | Store integer | |
FISTP | Store integer and pop | |
FISUB | Integer subtract | |
FISUBR | Integer subtract reversed | |
FLD | Floating point load | |
FLD1 | Load 1.0 onto stack | |
FLDCW | Load control word | |
FLDENV | Load environment state | |
FLDENVW | Load environment state, 16-bit | |
FLDL2E | Load log2(e) onto stack | |
FLDL2T | Load log2(10) onto stack | |
FLDLG2 | Load log10(2) onto stack | |
FLDLN2 | Load ln(2) onto stack | |
FLDPI | Load π onto stack | |
FLDZ | Load 0.0 onto stack | |
FMUL | Multiply | |
FMULP | Multiply and pop | |
FNCLEX | Clear exceptions, no wait | |
FNDISI | Disable interrupts, no wait | 8087 only, otherwise FNOP |
FNENI | Enable interrupts, no wait | 8087 only, otherwise FNOP |
FNINIT | Initialize floating point processor, no wait | |
FNOP | No operation | |
FNSAVE | Save FPU state, no wait | |
FNSAVEW | Save FPU state, no wait, 16-bit | |
FNSTCW | Store control word, no wait | |
FNSTENV | Store FPU environment, no wait | |
FNSTENVW | Store FPU environment, no wait, 16-bit | |
FNSTSW | Store status word, no wait | |
FPATAN | Partial arctangent | |
FPREM | Partial remainder | |
FPTAN | Partial tangent | |
FRNDINT | Round to integer | |
FRSTOR | Restore saved state | |
FRSTORW | Restore saved state | Perhaps not actually available in 8087 |
FSAVE | Save FPU state | |
FSAVEW | Save FPU state, 16-bit | |
FSCALE | Scale by factor of 2 | |
FSQRT | Square root | |
FST | Floating point store | |
FSTCW | Store control word | |
FSTENV | Store FPU environment | |
FSTENVW | Store FPU environment, 16-bit | |
FSTP | Store and pop | |
FSTSW | Store status word | |
FSUB | Subtract | |
FSUBP | Subtract and pop | |
FSUBR | Reverse subtract | |
FSUBRP | Reverse subtract and pop | |
FTST | Test for zero | |
FWAIT | Wait while FPU is executing | |
FXAM | Examine condition flags | |
FXCH | Exchange registers | |
FXTRACT | Extract exponent and significand | |
FYL2X | y · log2(x) | if y = 1, the base-2 logarithm of x is computed |
FYL2XP1 | y · log2(x + 1) | more precise than FYL2X when x is close to zero |
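A minimal x87 sketch using the constant-load and logarithm instructions above (NASM syntax; `x` and `result` are hypothetical memory operands, and x is assumed to be positive):

```nasm
; result = log2(x): FYL2X computes ST1 * log2(ST0) and pops the stack.
        fld1                        ; ST0 = 1.0 (the y operand)
        fld     qword [x]           ; ST0 = x, ST1 = 1.0
        fyl2x                       ; ST0 = 1.0 * log2(x)
        fstp    qword [result]      ; store and pop

; 2^x - 1 for |x| <= 1, the valid input range of F2XM1.
        fld     qword [x]
        f2xm1                       ; ST0 = 2^x - 1
        fstp    qword [result]
```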
Instruction | Opcode | Meaning | Notes |
EMMS | 0F 77 | Empty MMX Technology State | Marks all x87 FPU registers for use by FPU |
MOVD mm, r/m32 | 0F 6E /r | Move doubleword | |
MOVD r/m32, mm | 0F 7E /r | Move doubleword | |
MOVQ mm/m64, mm | 0F 7F /r | Move quadword | |
MOVQ mm, mm/m64 | 0F 6F /r | Move quadword | |
MOVQ mm, r/m64 | REX.W + 0F 6E /r | Move quadword | |
MOVQ r/m64, mm | REX.W + 0F 7E /r | Move quadword | |
PACKSSDW mm1, mm2/m64 | 0F 6B /r | Pack doublewords to words | |
PACKSSWB mm1, mm2/m64 | 0F 63 /r | Pack words to bytes | |
PACKUSWB mm, mm/m64 | 0F 67 /r | Pack words to bytes | |
PADDB mm, mm/m64 | 0F FC /r | Add packed byte integers | |
PADDW mm, mm/m64 | 0F FD /r | Add packed word integers | |
PADDD mm, mm/m64 | 0F FE /r | Add packed doubleword integers | |
PADDQ mm, mm/m64 | 0F D4 /r | Add packed quadword integers | |
PADDSB mm, mm/m64 | 0F EC /r | Add packed signed byte integers and saturate | |
PADDSW mm, mm/m64 | 0F ED /r | Add packed signed word integers and saturate | |
PADDUSB mm, mm/m64 | 0F DC /r | Add packed unsigned byte integers and saturate | |
PADDUSW mm, mm/m64 | 0F DD /r | Add packed unsigned word integers and saturate | |
PAND mm, mm/m64 | 0F DB /r | Bitwise AND | |
PANDN mm, mm/m64 | 0F DF /r | Bitwise AND NOT | |
POR mm, mm/m64 | 0F EB /r | Bitwise OR | |
PXOR mm, mm/m64 | 0F EF /r | Bitwise XOR | |
PCMPEQB mm, mm/m64 | 0F 74 /r | Compare packed bytes for equality | |
PCMPEQW mm, mm/m64 | 0F 75 /r | Compare packed words for equality | |
PCMPEQD mm, mm/m64 | 0F 76 /r | Compare packed doublewords for equality | |
PCMPGTB mm, mm/m64 | 0F 64 /r | Compare packed signed byte integers for greater than | |
PCMPGTW mm, mm/m64 | 0F 65 /r | Compare packed signed word integers for greater than | |
PCMPGTD mm, mm/m64 | 0F 66 /r | Compare packed signed doubleword integers for greater than | |
PMADDWD mm, mm/m64 | 0F F5 /r | Multiply packed words, add adjacent doubleword results | |
PMULHW mm, mm/m64 | 0F E5 /r | Multiply packed signed word integers, store high 16 bits of results | |
PMULLW mm, mm/m64 | 0F D5 /r | Multiply packed signed word integers, store low 16 bits of results | |
PSLLW mm1, imm8 | 0F 71 /6 ib | Shift left words, shift in zeros | |
PSLLW mm, mm/m64 | 0F F1 /r | Shift left words, shift in zeros | |
PSLLD mm, imm8 | 0F 72 /6 ib | Shift left doublewords, shift in zeros | |
PSLLD mm, mm/m64 | 0F F2 /r | Shift left doublewords, shift in zeros | |
PSLLQ mm, imm8 | 0F 73 /6 ib | Shift left quadword, shift in zeros | |
PSLLQ mm, mm/m64 | 0F F3 /r | Shift left quadword, shift in zeros | |
PSRAD mm, imm8 | 0F 72 /4 ib | Shift right doublewords, shift in sign bits | |
PSRAD mm, mm/m64 | 0F E2 /r | Shift right doublewords, shift in sign bits | |
PSRAW mm, imm8 | 0F 71 /4 ib | Shift right words, shift in sign bits | |
PSRAW mm, mm/m64 | 0F E1 /r | Shift right words, shift in sign bits | |
PSRLW mm, imm8 | 0F 71 /2 ib | Shift right words, shift in zeros | |
PSRLW mm, mm/m64 | 0F D1 /r | Shift right words, shift in zeros | |
PSRLD mm, imm8 | 0F 72 /2 ib | Shift right doublewords, shift in zeros | |
PSRLD mm, mm/m64 | 0F D2 /r | Shift right doublewords, shift in zeros | |
PSRLQ mm, imm8 | 0F 73 /2 ib | Shift right quadword, shift in zeros | |
PSRLQ mm, mm/m64 | 0F D3 /r | Shift right quadword, shift in zeros | |
PSUBB mm, mm/m64 | 0F F8 /r | Subtract packed byte integers | |
PSUBW mm, mm/m64 | 0F F9 /r | Subtract packed word integers | |
PSUBD mm, mm/m64 | 0F FA /r | Subtract packed doubleword integers | |
PSUBSB mm, mm/m64 | 0F E8 /r | Subtract signed packed bytes with saturation | |
PSUBSW mm, mm/m64 | 0F E9 /r | Subtract signed packed words with saturation | |
PSUBUSB mm, mm/m64 | 0F D8 /r | Subtract unsigned packed bytes with saturation | |
PSUBUSW mm, mm/m64 | 0F D9 /r | Subtract unsigned packed words with saturation | |
PUNPCKHBW mm, mm/m64 | 0F 68 /r | Unpack and interleave high-order bytes | |
PUNPCKHWD mm, mm/m64 | 0F 69 /r | Unpack and interleave high-order words | |
PUNPCKHDQ mm, mm/m64 | 0F 6A /r | Unpack and interleave high-order doublewords | |
PUNPCKLBW mm, mm/m32 | 0F 60 /r | Unpack and interleave low-order bytes | |
PUNPCKLWD mm, mm/m32 | 0F 61 /r | Unpack and interleave low-order words | |
PUNPCKLDQ mm, mm/m32 | 0F 62 /r | Unpack and interleave low-order doublewords | |
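A minimal MMX sketch (NASM syntax; `src1`, `src2`, and `dst` are hypothetical 8-byte buffers):

```nasm
; Add two groups of 8 unsigned bytes with saturation, then release the MMX state.
        movq    mm0, [src1]         ; load 8 packed bytes
        paddusb mm0, [src2]         ; per-byte unsigned add, clamped at 255
        movq    [dst], mm0
        emms                        ; mark the x87/MMX registers empty before FPU code runs
```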
Instruction | Opcode | Meaning |
ANDPS* xmm1, xmm2/m128 | 0F 54 /r | Bitwise Logical AND of Packed Single-Precision Floating-Point Values |
ANDNPS* xmm1, xmm2/m128 | 0F 55 /r | Bitwise Logical AND NOT of Packed Single-Precision Floating-Point Values |
ORPS* xmm1, xmm2/m128 | 0F 56 /r | Bitwise Logical OR of Single-Precision Floating-Point Values |
XORPS* xmm1, xmm2/m128 | 0F 57 /r | Bitwise Logical XOR for Single-Precision Floating-Point Values |
MOVUPS xmm1, xmm2/m128 | 0F 10 /r | Move Unaligned Packed Single-Precision Floating-Point Values |
MOVSS xmm1, xmm2/m32 | F3 0F 10 /r | Move Scalar Single-Precision Floating-Point Values |
MOVUPS xmm2/m128, xmm1 | 0F 11 /r | Move Unaligned Packed Single-Precision Floating-Point Values |
MOVSS xmm2/m32, xmm1 | F3 0F 11 /r | Move Scalar Single-Precision Floating-Point Values |
MOVLPS xmm, m64 | 0F 12 /r | Move Low Packed Single-Precision Floating-Point Values |
MOVHLPS xmm1, xmm2 | 0F 12 /r | Move Packed Single-Precision Floating-Point Values High to Low |
MOVLPS m64, xmm | 0F 13 /r | Move Low Packed Single-Precision Floating-Point Values |
UNPCKLPS xmm1, xmm2/m128 | 0F 14 /r | Unpack and Interleave Low Packed Single-Precision Floating-Point Values |
UNPCKHPS xmm1, xmm2/m128 | 0F 15 /r | Unpack and Interleave High Packed Single-Precision Floating-Point Values |
MOVHPS xmm, m64 | 0F 16 /r | Move High Packed Single-Precision Floating-Point Values |
MOVLHPS xmm1, xmm2 | 0F 16 /r | Move Packed Single-Precision Floating-Point Values Low to High |
MOVHPS m64, xmm | 0F 17 /r | Move High Packed Single-Precision Floating-Point Values |
MOVAPS xmm1, xmm2/m128 | 0F 28 /r | Move Aligned Packed Single-Precision Floating-Point Values |
MOVAPS xmm2/m128, xmm1 | 0F 29 /r | Move Aligned Packed Single-Precision Floating-Point Values |
MOVNTPS m128, xmm1 | 0F 2B /r | Move Aligned Four Packed Single-FP Non Temporal |
MOVMSKPS reg, xmm | 0F 50 /r | Extract Packed Single-Precision Floating-Point 4-bit Sign Mask. The upper bits of the register are filled with zeros. |
CVTPI2PS xmm, mm/m64 | 0F 2A /r | Convert Packed Dword Integers to Packed Single-Precision FP Values |
CVTSI2SS xmm, r/m32 | F3 0F 2A /r | Convert Dword Integer to Scalar Single-Precision FP Value |
CVTSI2SS xmm, r/m64 | F3 REX.W 0F 2A /r | Convert Qword Integer to Scalar Single-Precision FP Value |
MOVNTPS m128, xmm | 0F 2B /r | Store Packed Single-Precision Floating-Point Values Using Non-Temporal Hint |
CVTTPS2PI mm, xmm/m64 | 0F 2C /r | Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers |
CVTTSS2SI r32, xmm/m32 | F3 0F 2C /r | Convert with Truncation Scalar Single-Precision FP Value to Dword Integer |
CVTTSS2SI r64, xmm1/m32 | F3 REX.W 0F 2C /r | Convert with Truncation Scalar Single-Precision FP Value to Qword Integer |
CVTPS2PI mm, xmm/m64 | 0F 2D /r | Convert Packed Single-Precision FP Values to Packed Dword Integers |
CVTSS2SI r32, xmm/m32 | F3 0F 2D /r | Convert Scalar Single-Precision FP Value to Dword Integer |
CVTSS2SI r64, xmm1/m32 | F3 REX.W 0F 2D /r | Convert Scalar Single-Precision FP Value to Qword Integer |
UCOMISS xmm1, xmm2/m32 | 0F 2E /r | Unordered Compare Scalar Single-Precision Floating-Point Values and Set EFLAGS |
COMISS xmm1, xmm2/m32 | 0F 2F /r | Compare Scalar Ordered Single-Precision Floating-Point Values and Set EFLAGS |
SQRTPS xmm1, xmm2/m128 | 0F 51 /r | Compute Square Roots of Packed Single-Precision Floating-Point Values |
SQRTSS xmm1, xmm2/m32 | F3 0F 51 /r | Compute Square Root of Scalar Single-Precision Floating-Point Value |
RSQRTPS xmm1, xmm2/m128 | 0F 52 /r | Compute Reciprocal of Square Root of Packed Single-Precision Floating-Point Value |
RSQRTSS xmm1, xmm2/m32 | F3 0F 52 /r | Compute Reciprocal of Square Root of Scalar Single-Precision Floating-Point Value |
RCPPS xmm1, xmm2/m128 | 0F 53 /r | Compute Reciprocal of Packed Single-Precision Floating-Point Values |
RCPSS xmm1, xmm2/m32 | F3 0F 53 /r | Compute Reciprocal of Scalar Single-Precision Floating-Point Values |
ADDPS xmm1, xmm2/m128 | 0F 58 /r | Add Packed Single-Precision Floating-Point Values |
ADDSS xmm1, xmm2/m32 | F3 0F 58 /r | Add Scalar Single-Precision Floating-Point Values |
MULPS xmm1, xmm2/m128 | 0F 59 /r | Multiply Packed Single-Precision Floating-Point Values |
MULSS xmm1, xmm2/m32 | F3 0F 59 /r | Multiply Scalar Single-Precision Floating-Point Values |
SUBPS xmm1, xmm2/m128 | 0F 5C /r | Subtract Packed Single-Precision Floating-Point Values |
SUBSS xmm1, xmm2/m32 | F3 0F 5C /r | Subtract Scalar Single-Precision Floating-Point Values |
MINPS xmm1, xmm2/m128 | 0F 5D /r | Return Minimum Packed Single-Precision Floating-Point Values |
MINSS xmm1, xmm2/m32 | F3 0F 5D /r | Return Minimum Scalar Single-Precision Floating-Point Values |
DIVPS xmm1, xmm2/m128 | 0F 5E /r | Divide Packed Single-Precision Floating-Point Values |
DIVSS xmm1, xmm2/m32 | F3 0F 5E /r | Divide Scalar Single-Precision Floating-Point Values |
MAXPS xmm1, xmm2/m128 | 0F 5F /r | Return Maximum Packed Single-Precision Floating-Point Values |
MAXSS xmm1, xmm2/m32 | F3 0F 5F /r | Return Maximum Scalar Single-Precision Floating-Point Values |
LDMXCSR m32 | 0F AE /2 | Load MXCSR Register State |
STMXCSR m32 | 0F AE /3 | Store MXCSR Register State |
CMPPS xmm1, xmm2/m128, imm8 | 0F C2 /r ib | Compare Packed Single-Precision Floating-Point Values |
CMPSS xmm1, xmm2/m32, imm8 | F3 0F C2 /r ib | Compare Scalar Single-Precision Floating-Point Values |
SHUFPS xmm1, xmm2/m128, imm8 | 0F C6 /r ib | Shuffle Packed Single-Precision Floating-Point Values |
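A minimal SSE sketch (NASM syntax; `a`, `b`, and `c` are hypothetical arrays of four single-precision floats):

```nasm
; c[0..3] = a[0..3] + b[0..3], element-wise, then collect the sign bits.
        movups   xmm0, [a]          ; unaligned load of 4 floats
        movups   xmm1, [b]
        addps    xmm0, xmm1         ; packed add
        movups   [c], xmm0
        movmskps eax, xmm0          ; EAX[3:0] = sign bit of each result lane
```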
Instruction | Opcode | Meaning |
MOVD xmm, r/m32 | 66 0F 6E /r | Move doubleword |
MOVD r/m32, xmm | 66 0F 7E /r | Move doubleword |
MOVQ xmm1, xmm2/m64 | F3 0F 7E /r | Move quadword |
MOVQ xmm2/m64, xmm1 | 66 0F D6 /r | Move quadword |
MOVQ r/m64, xmm | 66 REX.W 0F 7E /r | Move quadword |
MOVQ xmm, r/m64 | 66 REX.W 0F 6E /r | Move quadword |
PMOVMSKB reg, xmm | 66 0F D7 /r | Move a byte mask, zeroing the upper bits of the register |
PEXTRW reg, xmm, imm8 | 66 0F C5 /r ib | Extract specified word and move it to reg, setting bits 15-0 and zeroing the rest |
PINSRW xmm, r32/m16, imm8 | 66 0F C4 /r ib | Insert the low word of the source operand at the specified word position |
PACKSSDW xmm1, xmm2/m128 | 66 0F 6B /r | Converts 4 packed signed doubleword integers into 8 packed signed word integers with saturation |
PACKSSWB xmm1, xmm2/m128 | 66 0F 63 /r | Converts 8 packed signed word integers into 16 packed signed byte integers with saturation |
PACKUSWB xmm1, xmm2/m128 | 66 0F 67 /r | Converts 8 signed word integers into 16 unsigned byte integers with saturation |
PADDB xmm1, xmm2/m128 | 66 0F FC /r | Add packed byte integers |
PADDW xmm1, xmm2/m128 | 66 0F FD /r | Add packed word integers |
PADDD xmm1, xmm2/m128 | 66 0F FE /r | Add packed doubleword integers |
PADDQ xmm1, xmm2/m128 | 66 0F D4 /r | Add packed quadword integers. |
PADDSB xmm1, xmm2/m128 | 66 0F EC /r | Add packed signed byte integers with saturation |
PADDSW xmm1, xmm2/m128 | 66 0F ED /r | Add packed signed word integers with saturation |
PADDUSB xmm1, xmm2/m128 | 66 0F DC /r | Add packed unsigned byte integers with saturation |
PADDUSW xmm1, xmm2/m128 | 66 0F DD /r | Add packed unsigned word integers with saturation |
PAND xmm1, xmm2/m128 | 66 0F DB /r | Bitwise AND |
PANDN xmm1, xmm2/m128 | 66 0F DF /r | Bitwise AND NOT |
POR xmm1, xmm2/m128 | 66 0F EB /r | Bitwise OR |
PXOR xmm1, xmm2/m128 | 66 0F EF /r | Bitwise XOR |
PCMPEQB xmm1, xmm2/m128 | 66 0F 74 /r | Compare packed bytes for equality. |
PCMPEQW xmm1, xmm2/m128 | 66 0F 75 /r | Compare packed words for equality. |
PCMPEQD xmm1, xmm2/m128 | 66 0F 76 /r | Compare packed doublewords for equality. |
PCMPGTB xmm1, xmm2/m128 | 66 0F 64 /r | Compare packed signed byte integers for greater than |
PCMPGTW xmm1, xmm2/m128 | 66 0F 65 /r | Compare packed signed word integers for greater than |
PCMPGTD xmm1, xmm2/m128 | 66 0F 66 /r | Compare packed signed doubleword integers for greater than |
PMULLW xmm1, xmm2/m128 | 66 0F D5 /r | Multiply packed signed word integers and store the low 16 bits of the results |
PMULHW xmm1, xmm2/m128 | 66 0F E5 /r | Multiply the packed signed word integers, store the high 16 bits of the results |
PMULHUW xmm1, xmm2/m128 | 66 0F E4 /r | Multiply packed unsigned word integers, store the high 16 bits of the results |
PMULUDQ xmm1, xmm2/m128 | 66 0F F4 /r | Multiply packed unsigned doubleword integers |
PSLLW xmm1, xmm2/m128 | 66 0F F1 /r | Shift words left while shifting in 0s |
PSLLW xmm1, imm8 | 66 0F 71 /6 ib | Shift words left while shifting in 0s |
PSLLD xmm1, xmm2/m128 | 66 0F F2 /r | Shift doublewords left while shifting in 0s |
PSLLD xmm1, imm8 | 66 0F 72 /6 ib | Shift doublewords left while shifting in 0s |
PSLLQ xmm1, xmm2/m128 | 66 0F F3 /r | Shift quadwords left while shifting in 0s |
PSLLQ xmm1, imm8 | 66 0F 73 /6 ib | Shift quadwords left while shifting in 0s |
PSRAD xmm1, xmm2/m128 | 66 0F E2 /r | Shift doubleword right while shifting in sign bits |
PSRAD xmm1, imm8 | 66 0F 72 /4 ib | Shift doublewords right while shifting in sign bits |
PSRAW xmm1, xmm2/m128 | 66 0F E1 /r | Shift words right while shifting in sign bits |
PSRAW xmm1, imm8 | 66 0F 71 /4 ib | Shift words right while shifting in sign bits |
PSRLW xmm1, xmm2/m128 | 66 0F D1 /r | Shift words right while shifting in 0s |
PSRLW xmm1, imm8 | 66 0F 71 /2 ib | Shift words right while shifting in 0s |
PSRLD xmm1, xmm2/m128 | 66 0F D2 /r | Shift doublewords right while shifting in 0s |
PSRLD xmm1, imm8 | 66 0F 72 /2 ib | Shift doublewords right while shifting in 0s |
PSRLQ xmm1, xmm2/m128 | 66 0F D3 /r | Shift quadwords right while shifting in 0s |
PSRLQ xmm1, imm8 | 66 0F 73 /2 ib | Shift quadwords right while shifting in 0s |
PSUBB xmm1, xmm2/m128 | 66 0F F8 /r | Subtract packed byte integers |
PSUBW xmm1, xmm2/m128 | 66 0F F9 /r | Subtract packed word integers |
PSUBD xmm1, xmm2/m128 | 66 0F FA /r | Subtract packed doubleword integers |
PSUBQ xmm1, xmm2/m128 | 66 0F FB /r | Subtract packed quadword integers. |
PSUBSB xmm1, xmm2/m128 | 66 0F E8 /r | Subtract packed signed byte integers with saturation |
PSUBSW xmm1, xmm2/m128 | 66 0F E9 /r | Subtract packed signed word integers with saturation |
PMADDWD xmm1, xmm2/m128 | 66 0F F5 /r | Multiply the packed word integers, add adjacent doubleword results |
PSUBUSB xmm1, xmm2/m128 | 66 0F D8 /r | Subtract packed unsigned byte integers with saturation |
PSUBUSW xmm1, xmm2/m128 | 66 0F D9 /r | Subtract packed unsigned word integers with saturation |
PUNPCKHBW xmm1, xmm2/m128 | 66 0F 68 /r | Unpack and interleave high-order bytes |
PUNPCKHWD xmm1, xmm2/m128 | 66 0F 69 /r | Unpack and interleave high-order words |
PUNPCKHDQ xmm1, xmm2/m128 | 66 0F 6A /r | Unpack and interleave high-order doublewords |
PUNPCKLBW xmm1, xmm2/m128 | 66 0F 60 /r | Interleave low-order bytes |
PUNPCKLWD xmm1, xmm2/m128 | 66 0F 61 /r | Interleave low-order words |
PUNPCKLDQ xmm1, xmm2/m128 | 66 0F 62 /r | Interleave low-order doublewords |
PAVGB xmm1, xmm2/m128 | 66 0F E0 /r | Average packed unsigned byte integers with rounding |
PAVGW xmm1, xmm2/m128 | 66 0F E3 /r | Average packed unsigned word integers with rounding |
PMINUB xmm1, xmm2/m128 | 66 0F DA /r | Compare packed unsigned byte integers and store packed minimum values |
PMINSW xmm1, xmm2/m128 | 66 0F EA /r | Compare packed signed word integers and store packed minimum values |
PMAXSW xmm1, xmm2/m128 | 66 0F EE /r | Compare packed signed word integers and store maximum packed values |
PMAXUB xmm1, xmm2/m128 | 66 0F DE /r | Compare packed unsigned byte integers and store packed maximum values |
PSADBW xmm1, xmm2/m128 | 66 0F F6 /r | Computes the absolute differences of the packed unsigned byte integers; the 8 low differences and 8 high differences are then summed separately to produce two unsigned word integer results |
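A minimal SSE2 integer sketch (NASM syntax): scanning 16 bytes for a target value, assuming xmm1 already holds the target byte replicated into all 16 lanes; `buf` and `match_found` are hypothetical, and MOVDQU (an SSE2 unaligned load not listed in this table) is used for the load:

```nasm
        movdqu   xmm0, [buf]        ; 16 input bytes (unaligned load)
        pcmpeqb  xmm0, xmm1         ; 0xFF in every matching lane, 0x00 elsewhere
        pmovmskb eax, xmm0          ; collapse to a 16-bit mask, one bit per byte
        test     eax, eax
        jnz      match_found        ; any set bit means at least one byte matched
match_found:
```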
Instruction | Opcode | Meaning |
DPPS xmm1, xmm2/m128, imm8 | 66 0F 3A 40 /r ib | Selectively multiply packed SP floating-point values, add and selectively store |
DPPD xmm1, xmm2/m128, imm8 | 66 0F 3A 41 /r ib | Selectively multiply packed DP floating-point values, add and selectively store |
BLENDPS xmm1, xmm2/m128, imm8 | 66 0F 3A 0C /r ib | Select packed single precision floating-point values from specified mask |
BLENDVPS xmm1, xmm2/m128, <XMM0> | 66 0F 38 14 /r | Select packed single precision floating-point values from specified mask |
BLENDPD xmm1, xmm2/m128, imm8 | 66 0F 3A 0D /r ib | Select packed DP-FP values from specified mask |
BLENDVPD xmm1, xmm2/m128, <XMM0> | 66 0F 38 15 /r | Select packed DP FP values from specified mask |
ROUNDPS xmm1, xmm2/m128, imm8 | 66 0F 3A 08 /r ib | Round packed single precision floating-point values |
ROUNDSS xmm1, xmm2/m32, imm8 | 66 0F 3A 0A /r ib | Round the low packed single precision floating-point value |
ROUNDPD xmm1, xmm2/m128, imm8 | 66 0F 3A 09 /r ib | Round packed double precision floating-point values |
ROUNDSD xmm1, xmm2/m64, imm8 | 66 0F 3A 0B /r ib | Round the low packed double precision floating-point value |
INSERTPS xmm1, xmm2/m32, imm8 | 66 0F 3A 21 /r ib | Insert a selected single-precision floating-point value at the specified destination element and zero out destination elements |
EXTRACTPS reg/m32, xmm1, imm8 | 66 0F 3A 17 /r ib | Extract one single-precision floating-point value at specified offset and store the result |
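A minimal SSE4.1 sketch of DPPS (NASM syntax; `va`, `vb`, and `dot` are hypothetical memory operands):

```nasm
; 4-element single-precision dot product in one instruction.
        movups  xmm0, [va]
        movups  xmm1, [vb]
        dpps    xmm0, xmm1, 0xF1    ; high nibble: multiply all 4 lanes; low nibble: write the sum to lane 0
        movss   [dot], xmm0         ; low lane now holds the dot product
```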
Instruction | Opcode | Meaning |
MPSADBW xmm1, xmm2/m128, imm8 | 66 0F 3A 42 /r ib | Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers with starting offset |
PHMINPOSUW xmm1, xmm2/m128 | 66 0F 38 41 /r | Find the minimum unsigned word |
PMULLD xmm1, xmm2/m128 | 66 0F 38 40 /r | Multiply the packed dword signed integers and store the low 32 bits |
PMULDQ xmm1, xmm2/m128 | 66 0F 38 28 /r | Multiply packed signed doubleword integers and store quadword result |
PBLENDVB xmm1, xmm2/m128, <XMM0> | 66 0F 38 10 /r | Select byte values from specified mask |
PBLENDW xmm1, xmm2/m128, imm8 | 66 0F 3A 0E /r ib | Select words from specified mask |
PMINSB xmm1, xmm2/m128 | 66 0F 38 38 /r | Compare packed signed byte integers |
PMINUW xmm1, xmm2/m128 | 66 0F 38 3A /r | Compare packed unsigned word integers |
PMINSD xmm1, xmm2/m128 | 66 0F 38 39 /r | Compare packed signed dword integers |
PMINUD xmm1, xmm2/m128 | 66 0F 38 3B /r | Compare packed unsigned dword integers |
PMAXSB xmm1, xmm2/m128 | 66 0F 38 3C /r | Compare packed signed byte integers |
PMAXUW xmm1, xmm2/m128 | 66 0F 38 3E /r | Compare packed unsigned word integers |
PMAXSD xmm1, xmm2/m128 | 66 0F 38 3D /r | Compare packed signed dword integers |
PMAXUD xmm1, xmm2/m128 | 66 0F 38 3F /r | Compare packed unsigned dword integers |
PINSRB xmm1, r32/m8, imm8 | 66 0F 3A 20 /r ib | Insert a byte integer value at specified destination element |
PINSRD xmm1, r/m32, imm8 | 66 0F 3A 22 /r ib | Insert a dword integer value at specified destination element |
PINSRQ xmm1, r/m64, imm8 | 66 REX.W 0F 3A 22 /r ib | Insert a qword integer value at specified destination element |
PEXTRB reg/m8, xmm2, imm8 | 66 0F 3A 14 /r ib | Extract a byte integer value at source byte offset, upper bits are zeroed. |
PEXTRW reg/m16, xmm, imm8 | 66 0F 3A 15 /r ib | Extract word and copy to lowest 16 bits, zero-extended |
PEXTRD r/m32, xmm2, imm8 | 66 0F 3A 16 /r ib | Extract a dword integer value at source dword offset |
PEXTRQ r/m64, xmm2, imm8 | 66 REX.W 0F 3A 16 /r ib | Extract a qword integer value at source qword offset |
PMOVSXBW xmm1, xmm2/m64 | 66 0F 38 20 /r | Sign extend 8 packed 8-bit integers to 8 packed 16-bit integers |
PMOVZXBW xmm1, xmm2/m64 | 66 0F 38 30 /r | Zero extend 8 packed 8-bit integers to 8 packed 16-bit integers |
PMOVSXBD xmm1, xmm2/m32 | 66 0F 38 21 /r | Sign extend 4 packed 8-bit integers to 4 packed 32-bit integers |
PMOVZXBD xmm1, xmm2/m32 | 66 0F 38 31 /r | Zero extend 4 packed 8-bit integers to 4 packed 32-bit integers |
PMOVSXBQ xmm1, xmm2/m16 | 66 0F 38 22 /r | Sign extend 2 packed 8-bit integers to 2 packed 64-bit integers |
PMOVZXBQ xmm1, xmm2/m16 | 66 0F 38 32 /r | Zero extend 2 packed 8-bit integers to 2 packed 64-bit integers |
PMOVSXWD xmm1, xmm2/m64 | 66 0F 38 23 /r | Sign extend 4 packed 16-bit integers to 4 packed 32-bit integers |
PMOVZXWD xmm1, xmm2/m64 | 66 0F 38 33 /r | Zero extend 4 packed 16-bit integers to 4 packed 32-bit integers |
PMOVSXWQ xmm1, xmm2/m32 | 66 0F 38 24 /r | Sign extend 2 packed 16-bit integers to 2 packed 64-bit integers |
PMOVZXWQ xmm1, xmm2/m32 | 66 0F 38 34 /r | Zero extend 2 packed 16-bit integers to 2 packed 64-bit integers |
PMOVSXDQ xmm1, xmm2/m64 | 66 0F 38 25 /r | Sign extend 2 packed 32-bit integers to 2 packed 64-bit integers |
PMOVZXDQ xmm1, xmm2/m64 | 66 0F 38 35 /r | Zero extend 2 packed 32-bit integers to 2 packed 64-bit integers |
PTEST xmm1, xmm2/m128 | 66 0F 38 17 /r | Set ZF if AND result is all 0s, set CF if AND NOT result is all 0s |
PCMPEQQ xmm1, xmm2/m128 | 66 0F 38 29 /r | Compare packed qwords for equality |
PACKUSDW xmm1, xmm2/m128 | 66 0F 38 2B /r | Convert 2 × 4 packed signed doubleword integers into 8 packed unsigned word integers with saturation |
MOVNTDQA xmm1, m128 | 66 0F 38 2A /r | Move double quadword using non-temporal hint if WC memory type |
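A minimal sketch of the SSE4.1 widening and test instructions (NASM syntax; `src` and `all_zero` are hypothetical):

```nasm
        pmovzxbw xmm0, [src]        ; zero-extend 8 bytes into 8 16-bit lanes
        ptest    xmm0, xmm0         ; ZF = 1 only if every bit of xmm0 is zero
        jz       all_zero           ; branch when all eight input bytes were zero
all_zero:
```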
Instruction | Meaning |
VFMADD132PD | Fused Multiply-Add of Packed Double-Precision Floating-Point Values |
VFMADD213PD | Fused Multiply-Add of Packed Double-Precision Floating-Point Values |
VFMADD231PD | Fused Multiply-Add of Packed Double-Precision Floating-Point Values |
VFMADD132PS | Fused Multiply-Add of Packed Single-Precision Floating-Point Values |
VFMADD213PS | Fused Multiply-Add of Packed Single-Precision Floating-Point Values |
VFMADD231PS | Fused Multiply-Add of Packed Single-Precision Floating-Point Values |
VFMADD132SD | Fused Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFMADD213SD | Fused Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFMADD231SD | Fused Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFMADD132SS | Fused Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFMADD213SS | Fused Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFMADD231SS | Fused Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFMADDSUB132PD | Fused Multiply-Alternating Add/Subtract of Packed Double-Precision Floating-Point Values |
VFMADDSUB213PD | Fused Multiply-Alternating Add/Subtract of Packed Double-Precision Floating-Point Values |
VFMADDSUB231PD | Fused Multiply-Alternating Add/Subtract of Packed Double-Precision Floating-Point Values |
VFMADDSUB132PS | Fused Multiply-Alternating Add/Subtract of Packed Single-Precision Floating-Point Values |
VFMADDSUB213PS | Fused Multiply-Alternating Add/Subtract of Packed Single-Precision Floating-Point Values |
VFMADDSUB231PS | Fused Multiply-Alternating Add/Subtract of Packed Single-Precision Floating-Point Values |
VFMSUB132PD | Fused Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFMSUB213PD | Fused Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFMSUB231PD | Fused Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFMSUB132PS | Fused Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFMSUB213PS | Fused Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFMSUB231PS | Fused Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFMSUB132SD | Fused Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFMSUB213SD | Fused Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFMSUB231SD | Fused Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFMSUB132SS | Fused Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
VFMSUB213SS | Fused Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
VFMSUB231SS | Fused Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
VFMSUBADD132PD | Fused Multiply-Alternating Subtract/Add of Packed Double-Precision Floating-Point Values |
VFMSUBADD213PD | Fused Multiply-Alternating Subtract/Add of Packed Double-Precision Floating-Point Values |
VFMSUBADD231PD | Fused Multiply-Alternating Subtract/Add of Packed Double-Precision Floating-Point Values |
VFMSUBADD132PS | Fused Multiply-Alternating Subtract/Add of Packed Single-Precision Floating-Point Values |
VFMSUBADD213PS | Fused Multiply-Alternating Subtract/Add of Packed Single-Precision Floating-Point Values |
VFMSUBADD231PS | Fused Multiply-Alternating Subtract/Add of Packed Single-Precision Floating-Point Values |
VFNMADD132PD | Fused Negative Multiply-Add of Packed Double-Precision Floating-Point Values |
VFNMADD213PD | Fused Negative Multiply-Add of Packed Double-Precision Floating-Point Values |
VFNMADD231PD | Fused Negative Multiply-Add of Packed Double-Precision Floating-Point Values |
VFNMADD132PS | Fused Negative Multiply-Add of Packed Single-Precision Floating-Point Values |
VFNMADD213PS | Fused Negative Multiply-Add of Packed Single-Precision Floating-Point Values |
VFNMADD231PS | Fused Negative Multiply-Add of Packed Single-Precision Floating-Point Values |
VFNMADD132SD | Fused Negative Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFNMADD213SD | Fused Negative Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFNMADD231SD | Fused Negative Multiply-Add of Scalar Double-Precision Floating-Point Values |
VFNMADD132SS | Fused Negative Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFNMADD213SS | Fused Negative Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFNMADD231SS | Fused Negative Multiply-Add of Scalar Single-Precision Floating-Point Values |
VFNMSUB132PD | Fused Negative Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFNMSUB213PD | Fused Negative Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFNMSUB231PD | Fused Negative Multiply-Subtract of Packed Double-Precision Floating-Point Values |
VFNMSUB132PS | Fused Negative Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFNMSUB213PS | Fused Negative Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFNMSUB231PS | Fused Negative Multiply-Subtract of Packed Single-Precision Floating-Point Values |
VFNMSUB132SD | Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFNMSUB213SD | Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFNMSUB231SD | Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values |
VFNMSUB132SS | Fused Negative Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
VFNMSUB213SS | Fused Negative Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
VFNMSUB231SS | Fused Negative Multiply-Subtract of Scalar Single-Precision Floating-Point Values |
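A minimal FMA3 sketch (NASM syntax; `a` and `b` are hypothetical arrays of eight floats). The digits in the mnemonic select the operand roles: the 231 form computes dst = src2 × src3 + dst:

```nasm
        vmovups     ymm1, [a]
        vmovups     ymm2, [b]
        vfmadd231ps ymm0, ymm1, ymm2   ; ymm0 += ymm1 * ymm2, 8 floats with a single rounding
```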
Instruction | Opcode | Meaning | Notes |
VFMADDPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 69 /r /is4 | Fused Multiply-Add of Packed Double-Precision Floating-Point Values | |
VFMADDPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 68 /r /is4 | Fused Multiply-Add of Packed Single-Precision Floating-Point Values | |
VFMADDSD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6B /r /is4 | Fused Multiply-Add of Scalar Double-Precision Floating-Point Values | |
VFMADDSS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6A /r /is4 | Fused Multiply-Add of Scalar Single-Precision Floating-Point Values | |
VFMADDSUBPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 5D /r /is4 | Fused Multiply-Alternating Add/Subtract of Packed Double-Precision Floating-Point Values | |
VFMADDSUBPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 5C /r /is4 | Fused Multiply-Alternating Add/Subtract of Packed Single-Precision Floating-Point Values | |
VFMSUBADDPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 5F /r /is4 | Fused Multiply-Alternating Subtract/Add of Packed Double-Precision Floating-Point Values | |
VFMSUBADDPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 5E /r /is4 | Fused Multiply-Alternating Subtract/Add of Packed Single-Precision Floating-Point Values | |
VFMSUBPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6D /r /is4 | Fused Multiply-Subtract of Packed Double-Precision Floating-Point Values | |
VFMSUBPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6C /r /is4 | Fused Multiply-Subtract of Packed Single-Precision Floating-Point Values | |
VFMSUBSD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6F /r /is4 | Fused Multiply-Subtract of Scalar Double-Precision Floating-Point Values | |
VFMSUBSS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 6E /r /is4 | Fused Multiply-Subtract of Scalar Single-Precision Floating-Point Values | |
VFNMADDPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 79 /r /is4 | Fused Negative Multiply-Add of Packed Double-Precision Floating-Point Values | |
VFNMADDPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 78 /r /is4 | Fused Negative Multiply-Add of Packed Single-Precision Floating-Point Values | |
VFNMADDSD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7B /r /is4 | Fused Negative Multiply-Add of Scalar Double-Precision Floating-Point Values | |
VFNMADDSS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7A /r /is4 | Fused Negative Multiply-Add of Scalar Single-Precision Floating-Point Values | |
VFNMSUBPD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7D /r /is4 | Fused Negative Multiply-Subtract of Packed Double-Precision Floating-Point Values | |
VFNMSUBPS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7C /r /is4 | Fused Negative Multiply-Subtract of Packed Single-Precision Floating-Point Values | |
VFNMSUBSD xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7F /r /is4 | Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values | |
VFNMSUBSS xmm0, xmm1, xmm2, xmm3 | C4E3 WvvvvL01 7E /r /is4 | Fused Negative Multiply-Subtract of Scalar Single-Precision Floating-Point Values | |
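A minimal FMA4 sketch (NASM syntax; runs only on processors that implement FMA4). Unlike FMA3, the 4-operand form is non-destructive: the destination is separate from all three sources:

```nasm
        vfmaddps xmm0, xmm1, xmm2, xmm3   ; xmm0 = xmm1 * xmm2 + xmm3
```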
Instruction | Description |
VBROADCASTSS | Copy a 32-bit or 64-bit register operand to all elements of a XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version however, but the same effect can be simply achieved using VINSERTF128. |
VBROADCASTSD | Copy a 32-bit or 64-bit register operand to all elements of a XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version however, but the same effect can be simply achieved using VINSERTF128. |
VPBROADCASTB | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |
VPBROADCASTW | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |
VPBROADCASTD | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |
VPBROADCASTQ | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |
VBROADCASTI128 | Copy a 128-bit memory operand to all elements of a YMM vector register. |
VINSERTI128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
VEXTRACTI128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
VGATHERDPD | Gathers single or double precision floating point values using either 32 or 64-bit indices and scale. |
VGATHERQPD | Gathers single or double precision floating point values using either 32 or 64-bit indices and scale. |
VGATHERDPS | Gathers single or double precision floating point values using either 32 or 64-bit indices and scale. |
VGATHERQPS | Gathers single or double precision floating point values using either 32 or 64-bit indices and scale. |
VPGATHERDD | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |
VPGATHERDQ | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |
VPGATHERQD | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |
VPGATHERQQ | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |
VPMASKMOVD | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |
VPMASKMOVQ | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |
VPERMPS | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
VPERMD | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
VPERMPD | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
VPERMQ | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
VPERM2I128 | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
VPBLENDD | Doubleword immediate version of the PBLEND instructions from SSE4. |
VPSLLVD | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSLLVQ | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSRLVD | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSRLVQ | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSRAVD | Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input. |
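A minimal AVX2 sketch combining a broadcast with a per-element variable shift (NASM syntax; `scalar` and `shifts` are hypothetical memory operands):

```nasm
        vpbroadcastd ymm0, [scalar]   ; copy one 32-bit value into all 8 lanes
        vmovdqu      ymm1, [shifts]   ; 8 independent shift counts
        vpsllvd      ymm0, ymm0, ymm1 ; each lane is shifted left by its own count
```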
Instruction | Description |
VBLENDMPD | Blend float64 vectors using opmask control |
VBLENDMPS | Blend float32 vectors using opmask control |
VPBLENDMD | Blend int32 vectors using opmask control |
VPBLENDMQ | Blend int64 vectors using opmask control |
VPCMPD | Compare signed/unsigned doublewords into mask |
VPCMPUD | Compare signed/unsigned doublewords into mask |
VPCMPQ | Compare signed/unsigned quadwords into mask |
VPCMPUQ | Compare signed/unsigned quadwords into mask |
VPTESTMD | Logical AND and set mask for 32 or 64 bit integers. |
VPTESTMQ | Logical AND and set mask for 32 or 64 bit integers. |
VPTESTNMD | Logical NAND and set mask for 32 or 64 bit integers. |
VPTESTNMQ | Logical NAND and set mask for 32 or 64 bit integers. |
VCOMPRESSPD | Store sparse packed double/single-precision floating-point values into dense memory |
VCOMPRESSPS | Store sparse packed double/single-precision floating-point values into dense memory |
VPCOMPRESSD | Store sparse packed doubleword/quadword integer values into dense memory/register |
VPCOMPRESSQ | Store sparse packed doubleword/quadword integer values into dense memory/register |
VEXPANDPD | Load sparse packed double/single-precision floating-point values from dense memory |
VEXPANDPS | Load sparse packed double/single-precision floating-point values from dense memory |
VPEXPANDD | Load sparse packed doubleword/quadword integer values from dense memory/register |
VPEXPANDQ | Load sparse packed doubleword/quadword integer values from dense memory/register |
VPERMI2PD | Full single/double floating point permute overwriting the index. |
VPERMI2PS | Full single/double floating point permute overwriting the index. |
VPERMI2D | Full doubleword/quadword permute overwriting the index. |
VPERMI2Q | Full doubleword/quadword permute overwriting the index. |
VPERMT2PS | Full single/double floating point permute overwriting first source. |
VPERMT2PD | Full single/double floating point permute overwriting first source. |
VPERMT2D | Full doubleword/quadword permute overwriting first source. |
VPERMT2Q | Full doubleword/quadword permute overwriting first source. |
VSHUFF32x4 | Shuffle four packed 128-bit lines. |
VSHUFF64x2 | Shuffle four packed 128-bit lines. |
VSHUFFI32x4 | Shuffle four packed 128-bit lines. |
VSHUFFI64x2 | Shuffle four packed 128-bit lines. |
VPTERNLOGD | Bitwise Ternary Logic |
VPTERNLOGQ | Bitwise Ternary Logic |
VPMOVQD | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVSQD | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVUSQD | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVQW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVSQW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVUSQW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVQB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVSQB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVUSQB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVDW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVSDW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVUSDW | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVDB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVSDB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VPMOVUSDB | Down convert quadword or doubleword to doubleword, word or byte; unsaturated, saturated or saturated unsigned. The reverse of the sign/zero extend instructions from SSE4.1. |
VCVTPS2UDQ | Convert with or without truncation, packed single or double-precision floating point to packed unsigned doubleword integers. |
VCVTPD2UDQ | Convert with or without truncation, packed single or double-precision floating point to packed unsigned doubleword integers. |
VCVTTPS2UDQ | Convert with or without truncation, packed single or double-precision floating point to packed unsigned doubleword integers. |
VCVTTPD2UDQ | Convert with or without truncation, packed single or double-precision floating point to packed unsigned doubleword integers. |
VCVTSS2USI | Convert with or without truncation, scalar single or double-precision floating point to unsigned doubleword integer. |
VCVTSD2USI | Convert with or without truncation, scalar single or double-precision floating point to unsigned doubleword integer. |
VCVTTSS2USI | Convert with or without truncation, scalar single or double-precision floating point to unsigned doubleword integer. |
VCVTTSD2USI | Convert with or without truncation, scalar single or double-precision floating point to unsigned doubleword integer. |
VCVTUDQ2PS | Convert packed unsigned doubleword integers to packed single or double-precision floating point. |
VCVTUDQ2PD | Convert packed unsigned doubleword integers to packed single or double-precision floating point. |
VCVTUSI2PS | Convert scalar unsigned doubleword integers to single or double-precision floating point. |
VCVTUSI2PD | Convert scalar unsigned doubleword integers to single or double-precision floating point. |
VCVTUSI2SD | Convert scalar unsigned integers to single or double-precision floating point. |
VCVTUSI2SS | Convert scalar unsigned integers to single or double-precision floating point. |
VCVTQQ2PD | Convert packed quadword integers to packed single or double-precision floating point. |
VCVTQQ2PS | Convert packed quadword integers to packed single or double-precision floating point. |
VGETEXPPD | Convert exponents of packed fp values into fp values |
VGETEXPPS | Convert exponents of packed fp values into fp values |
VGETEXPSD | Convert exponent of scalar fp value into fp value |
VGETEXPSS | Convert exponent of scalar fp value into fp value |
VGETMANTPD | Extract vector of normalized mantissas from float32/float64 vector |
VGETMANTPS | Extract vector of normalized mantissas from float32/float64 vector |
VGETMANTSD | Extract float32/float64 of normalized mantissa from float32/float64 scalar |
VGETMANTSS | Extract float32/float64 of normalized mantissa from float32/float64 scalar |
VFIXUPIMMPD | Fix up special packed float32/float64 values |
VFIXUPIMMPS | Fix up special packed float32/float64 values |
VFIXUPIMMSD | Fix up special scalar float32/float64 value |
VFIXUPIMMSS | Fix up special scalar float32/float64 value |
VRCP14PD | Compute approximate reciprocals of packed float32/float64 values |
VRCP14PS | Compute approximate reciprocals of packed float32/float64 values |
VRCP14SD | Compute approximate reciprocals of scalar float32/float64 value |
VRCP14SS | Compute approximate reciprocals of scalar float32/float64 value |
VRNDSCALEPS | Round packed float32/float64 values to include a given number of fraction bits |
VRNDSCALEPD | Round packed float32/float64 values to include a given number of fraction bits |
VRNDSCALESS | Round scalar float32/float64 value to include a given number of fraction bits |
VRNDSCALESD | Round scalar float32/float64 value to include a given number of fraction bits |
VRSQRT14PD | Compute approximate reciprocals of square roots of packed float32/float64 values |
VRSQRT14PS | Compute approximate reciprocals of square roots of packed float32/float64 values |
VRSQRT14SD | Compute approximate reciprocal of square root of scalar float32/float64 value |
VRSQRT14SS | Compute approximate reciprocal of square root of scalar float32/float64 value |
VSCALEFPS | Scale packed float32/float64 values with float32/float64 values |
VSCALEFPD | Scale packed float32/float64 values with float32/float64 values |
VSCALEFSS | Scale scalar float32/float64 value with float32/float64 value |
VSCALEFSD | Scale scalar float32/float64 value with float32/float64 value |
VALIGND | Align doubleword or quadword vectors |
VALIGNQ | Align doubleword or quadword vectors |
VPABSQ | Packed absolute value quadword |
VPMAXSQ | Maximum of packed signed/unsigned quadword |
VPMAXUQ | Maximum of packed signed/unsigned quadword |
VPMINSQ | Minimum of packed signed/unsigned quadword |
VPMINUQ | Minimum of packed signed/unsigned quadword |
VPROLD | Bit rotate left or right |
VPROLVD | Bit rotate left or right |
VPROLQ | Bit rotate left or right |
VPROLVQ | Bit rotate left or right |
VPRORD | Bit rotate left or right |
VPRORVD | Bit rotate left or right |
VPRORQ | Bit rotate left or right |
VPRORVQ | Bit rotate left or right |
VPSCATTERDD | Scatter packed doubleword/quadword with signed doubleword and quadword indices |
VPSCATTERDQ | Scatter packed doubleword/quadword with signed doubleword and quadword indices |
VPSCATTERQD | Scatter packed doubleword/quadword with signed doubleword and quadword indices |
VPSCATTERQQ | Scatter packed doubleword/quadword with signed doubleword and quadword indices |
VSCATTERDPS | Scatter packed float32/float64 with signed doubleword and quadword indices |
VSCATTERDPD | Scatter packed float32/float64 with signed doubleword and quadword indices |
VSCATTERQPS | Scatter packed float32/float64 with signed doubleword and quadword indices |
VSCATTERQPD | Scatter packed float32/float64 with signed doubleword and quadword indices |
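A minimal AVX-512F sketch using an opmask and compression (NASM syntax; `src` is hypothetical and zmm1 is assumed to hold a comparison threshold):

```nasm
        vmovdqu32   zmm0, [src]          ; 16 packed dword elements
        vpcmpd      k1, zmm0, zmm1, 6    ; predicate 6 = "not less or equal": k1 marks lanes > threshold
        vpcompressd zmm2{k1}{z}, zmm0    ; pack the selected dwords to the front, zero the rest
```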