IOPS


Input/output operations per second (IOPS) is an input/output performance measurement used to characterize computer storage devices such as hard disk drives, solid-state drives, and storage area networks. As with any benchmark, IOPS numbers published by storage device manufacturers do not directly relate to real-world application performance.

Background

To meaningfully describe the performance characteristics of any storage device, it is necessary to specify a minimum of three metrics simultaneously: IOPS, response time, and workload. Absent simultaneous specification of response time and workload, IOPS are essentially meaningless. In isolation, IOPS can be considered analogous to the "revolutions per minute" of an automobile engine: an engine capable of spinning at 10,000 RPM with its transmission in neutral does not convey anything of value; however, an engine capable of developing specified torque and horsepower at a given RPM fully describes its capabilities.
The specific number of IOPS possible in any system configuration will vary greatly depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and the queue depth, and the data block sizes. Other factors can also affect the results, including the system setup, storage drivers, and operating-system background operations. When testing SSDs in particular, preconditioning must also be taken into account.
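As an illustration of how these variables enter a measurement, the toy Python sketch below times random 4 kB reads against a scratch file. It is POSIX-only and reads through the OS page cache, so it reports far higher numbers than a direct-I/O tool such as fio; the file size, read count, and block size are arbitrary choices, not recommended settings.

```python
# Toy random-read IOPS measurement (illustrative only; goes through the
# page cache, unlike real benchmarks that use direct I/O).
import os, random, tempfile, time

BLOCK = 4096            # 4 kB transfers, the usual random-I/O block size
FILE_SIZE = 16 * 2**20  # 16 MiB scratch file (arbitrary)
N_READS = 1000          # number of random reads to time (arbitrary)

# Create a scratch file filled with data.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(FILE_SIZE))
os.fsync(fd)

# Time N_READS reads at random block-aligned offsets.
offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(N_READS)]
start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)          # one 4 kB read per iteration
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(path)

iops = N_READS / elapsed
print(f"~{iops:.0f} IOPS ({iops * BLOCK / 1e6:.1f} MB/s at 4 kB)")
```

Changing the block size, the read/write mix, or the queue depth (here effectively 1, since reads are issued one at a time) would change the result, which is exactly why IOPS figures are meaningless without the workload parameters attached.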

Performance characteristics

The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a contiguous manner and are generally associated with large data transfer sizes, e.g. 128 kB. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g. 4 kB.
The most common performance characteristics are as follows:
Measurement | Description
Total IOPS | Total number of I/O operations per second
Random Read IOPS | Average number of random read I/O operations per second
Random Write IOPS | Average number of random write I/O operations per second
Sequential Read IOPS | Average number of sequential read I/O operations per second
Sequential Write IOPS | Average number of sequential write I/O operations per second

For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid-state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers typically indicate the maximum sustained bandwidth that the storage device can handle. Often sequential IOPS are reported as a simple MB/s number, as follows:

IOPS × TransferSizeInBytes = BytesPerSec

(with the answer typically converted to megabytes per second).
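The sequential IOPS-to-throughput relation (IOPS × transfer size in bytes = bytes per second) can be captured in a pair of small helpers. Decimal megabytes (10^6 bytes) are assumed here; a vendor may instead use binary mebibytes.

```python
# Convert between sequential IOPS and throughput, using the relation
# IOPS * transfer_size_bytes = bytes_per_second. Decimal MB assumed.

def iops_to_mbps(iops: float, transfer_size_bytes: int) -> float:
    """Throughput in MB/s implied by an IOPS figure at a given transfer size."""
    return iops * transfer_size_bytes / 1_000_000

def mbps_to_iops(mbps: float, transfer_size_bytes: int) -> float:
    """IOPS implied by a throughput figure at a given transfer size."""
    return mbps * 1_000_000 / transfer_size_bytes

# 2,000 sequential IOPS at 128 kB transfers:
print(iops_to_mbps(2000, 128 * 1024))   # 262.144 MB/s
```

The same arithmetic explains why sequential workloads are quoted in MB/s while random workloads are quoted in IOPS: at small transfer sizes the byte count is unimpressive even when the operation count is high.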
Some HDDs will improve in performance as the number of outstanding I/Os increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Most commodity SATA drives either cannot do this, or their implementation is so poor that no performance benefit can be seen. Enterprise-class SATA drives, such as the Western Digital Raptor and Seagate Barracuda NL, will improve by nearly 100% with deep queues. High-end SCSI drives more commonly found in servers generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.
While traditional HDDs have about the same IOPS for read and write operations, most NAND flash-based SSDs are much slower at writing than reading, because data cannot be rewritten directly into a previously written location, forcing a procedure called garbage collection. This has caused hardware test sites to start to provide independently measured results when testing IOPS performance.
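The write penalty is often modeled with a write-amplification factor: once garbage collection kicks in, each host write costs several physical NAND writes. The following toy model shows the effect; the raw IOPS and factor values are illustrative assumptions, not measurements of any particular drive.

```python
# Host-visible write IOPS under garbage collection, using the standard
# write-amplification relation:
#     host_write_iops = raw_nand_write_iops / write_amplification_factor
# All numeric values below are illustrative, not vendor data.

def effective_write_iops(raw_nand_iops: float, waf: float) -> float:
    """Host-visible write IOPS when each host write costs `waf` NAND writes."""
    return raw_nand_iops / waf

raw = 40_000                     # hypothetical raw NAND program ops per second
for waf in (1.0, 2.0, 4.0):      # fresh drive vs. steady-state garbage collection
    print(f"WAF {waf}: {effective_write_iops(raw, waf):.0f} write IOPS")
```

A rising write-amplification factor over the course of a sustained test is consistent with the burst-then-decline behavior described for the Intel X25-E below.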
Newer flash SSDs, such as the Intel X25-E, have much higher IOPS than traditional HDDs. In a test done by Xssist using IOmeter with 4 kB random transfers, a 70/30 read/write ratio, and a queue depth of 4, the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around the 50th minute onwards for the rest of the 8+ hour test run. Even with the drop in random IOPS after the 50th minute, the X25-E still had much higher IOPS than traditional hard disk drives. Some SSDs, including the OCZ RevoDrive 3 x2 PCIe using the SandForce controller, have shown much higher sustained write performance that more closely matches the read speed.

Examples

Mechanical hard drives

The block size used when testing significantly affects the number of IOPS performed by a given drive. Some typical performance figures are shown below:
Drive | Random IOPS | 64 kB random IOPS | 64 kB random MB/s | 512 kB random IOPS | 512 kB random MB/s | Sequential MB/s
FC / 15K | 163–178 | 151–169 | 9.7–10.8 | 97–123 | 49.7–63.1 | 73.5–127.5
SAS / 15K | 188–203 | 175–192 | 11.2–12.3 | 115–135 | 58.9–68.9 | 91.5–126.3
FC / 10K | 142–151 | 130–143 | 8.3–9.2 | 80–104 | 40.9–53.1 | 58.1–107.2
SAS / 10K | 142–151 | 130–143 | 8.3–9.2 | 80–104 | 40.9–53.1 | 58.1–107.2
SATA / 7200 | 73–79 | 69–76 | 4.4–4.9 | 47–63 | 24.3–32.1 | 43.4–97.8
SATA / 5400 | 57 | 55 | 3.5 | 44 | 22.6 |
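The random-IOPS figures for mechanical drives follow almost entirely from mechanical latency: roughly one I/O per (average seek time + half a rotation). The sketch below reproduces numbers in the ballpark of the table using typical assumed seek times, which are illustrative rather than vendor-quoted values.

```python
# Estimate random IOPS for a mechanical drive from rotational speed and
# average seek time:  IOPS ~= 1 / (avg_seek + avg_rotational_latency).
# Seek times below are typical assumed values, not vendor specifications.

def hdd_random_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for rpm, seek_ms in [(15_000, 3.8), (10_000, 4.0), (7_200, 9.0), (5_400, 12.0)]:
    print(f"{rpm} RPM: ~{hdd_random_iops(rpm, seek_ms):.0f} IOPS")
```

For a 15,000 RPM drive, half a rotation is 2 ms, so even a fast ~3.8 ms seek caps the drive near 170 random IOPS, which is why solid-state devices outperform HDDs on random workloads by orders of magnitude.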

Solid-state devices

Device | Type | IOPS | Interface | Notes
Intel X25-M G2 | SSD | ~8,600 IOPS | SATA 3 Gbit/s | Intel's data sheet claims 6,600/8,600 IOPS and 35,000 IOPS for random 4 kB writes and reads, respectively.
Intel X25-E | SSD | ~5,000 IOPS | SATA 3 Gbit/s | Intel's data sheet claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. The Intel X25-E G1 has around 3 times higher IOPS than the Intel X25-M G2.
G.Skill Phoenix Pro | SSD | ~20,000 IOPS | SATA 3 Gbit/s | SandForce-1200 based drive with enhanced firmware, rated at up to 50,000 IOPS, but benchmarking shows ~25,000 IOPS for random read and ~15,000 IOPS for random write for this particular drive.
OCZ Vertex 3 | SSD | Up to 60,000 IOPS | SATA 6 Gbit/s | Random 4 kB write
Corsair Force Series GT | SSD | Up to 85,000 IOPS | SATA 6 Gbit/s | 240 GB drive; 555 MB/s sequential read and 525 MB/s sequential write; random 4 kB write test
Samsung SSD 850 PRO | SSD | 100,000 read IOPS, 90,000 write IOPS | SATA 6 Gbit/s | 4 kB aligned random I/O; 10,000 read IOPS and 36,000 write IOPS at QD1; 550 MB/s sequential read and 520 MB/s sequential write on 256 GB and larger models; 550 MB/s sequential read and 470 MB/s sequential write on the 128 GB model
Memblaze PBlaze5 910/916 NVMe SSD | SSD | 1,000,000 random read IOPS, 303,000 random write IOPS | PCIe (NVMe) | Performance data is from the PBlaze5 C916 (6.4 TB) NVMe SSD.
OCZ Vertex 4 | SSD | Up to 120,000 IOPS | SATA 6 Gbit/s | 256 GB drive; 560 MB/s sequential read and 510 MB/s sequential write; random 4 kB read test 90,000 IOPS, random 4 kB write test 85,000 IOPS
Texas Memory Systems RamSan-20 | SSD | 120,000+ random read/write IOPS | PCIe | Includes RAM cache
Fusion-io ioDrive | SSD | 140,000 read IOPS, 135,000 write IOPS | PCIe |
Virident Systems tachIOn | SSD | 320,000 sustained read IOPS and 200,000 sustained write IOPS using 4 kB blocks | PCIe |
OCZ RevoDrive 3 X2 | SSD | 200,000 random 4 kB write IOPS | PCIe |
Fusion-io ioDrive Duo | SSD | 250,000+ IOPS | PCIe |
WHIPTAIL ACCELA | SSD | 250,000/200,000+ write/read IOPS | Fibre Channel, iSCSI, InfiniBand/SRP, NFS, SMB | Flash-based storage array
DDRdrive X1 | SSD | 300,000+ and 200,000+ IOPS | PCIe |
SolidFire SF3010/SF6010 | SSD | 250,000 4 kB read/write IOPS | iSCSI | Flash-based storage array
Intel SSD 750 Series | SSD | 440,000 read IOPS, 290,000 write IOPS | NVMe over PCIe 3.0 x4, U.2 and HHHL expansion card | 4 kB aligned random I/O with four workers at QD32, 1.2 TB model; up to 2.4 GB/s sequential read and 1.2 GB/s sequential write
Samsung SSD 960 EVO | SSD | 380,000 read IOPS, 360,000 write IOPS | NVMe over PCIe 3.0 x4, M.2 | 4 kB aligned random I/O with four workers at QD4, 1 TB model; 14,000 read IOPS and 50,000 write IOPS at QD1; 330,000 read and 330,000 write IOPS on the 500 GB model; 300,000 read and 330,000 write IOPS on the 250 GB model; up to 3.2 GB/s sequential read and 1.9 GB/s sequential write
Samsung SSD 960 PRO | SSD | 440,000 read IOPS, 360,000 write IOPS | NVMe over PCIe 3.0 x4, M.2 | 4 kB aligned random I/O with four workers at QD4, 1 TB and 2 TB models; 14,000 read IOPS and 50,000 write IOPS at QD1; 330,000 read and 330,000 write IOPS on the 512 GB model; up to 3.5 GB/s sequential read and 2.1 GB/s sequential write
Texas Memory Systems RamSan-720 Appliance | Flash/DRAM | 500,000 optimal read, 250,000 optimal write 4 kB IOPS | FC / InfiniBand |
OCZ Single SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 500,000 IOPS | PCIe |
WHIPTAIL INVICTA | SSD | 650,000/550,000+ read/write IOPS | Fibre Channel, iSCSI, InfiniBand/SRP, NFS | Flash-based storage array
Violin Systems Violin XVS 8 | 3RU flash memory array | As low as 50 μs latency; 400 μs latency at 1M IOPS; 1 ms latency at 2M IOPS; dedupe LUN: 340,000 IOPS at 1 ms | Fibre Channel, iSCSI, NVMe over FC |
Violin Systems XIO G4 | SSD array | Up to 400,000 IOPS at <1 ms latency | Fibre Channel, iSCSI | 2U dual-controller active/active, 8 Gb FC, 4 ports per controller
Texas Memory Systems RamSan-630 Appliance | Flash/DRAM | 1,000,000+ 4 kB random read/write IOPS | FC / InfiniBand |
IBM FlashSystem 840 | Flash/DRAM | 1,100,000+ 4 kB random read IOPS, 600,000 4 kB write IOPS | 8G FC / 16G FC / 10G FCoE / InfiniBand | Modular 2U storage shelf, 4 TB–48 TB
Fusion-io ioDrive Octal | SSD | 1,180,000+ random read/write IOPS | PCIe |
OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 1,200,000 IOPS | PCIe |
Texas Memory Systems RamSan-70 | Flash/DRAM | 1,200,000 random read/write IOPS | PCIe | Includes RAM cache
Kaminario K2 | SSD | Up to 2,000,000 IOPS; 1,200,000 IOPS in the SPC-1 benchmark simulating business applications | FC | MLC flash
NetApp FAS6240 cluster | Flash/Disk | 1,261,145 SPECsfs2008 NFSv3 IOPS using 1,440 15K disks across 60 shelves, with virtual storage tiering | NFS, SMB, FC, FCoE, iSCSI | SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. http://www.spec.org/sfs2008.
Fusion-io ioDrive2 | SSD | Up to 9,608,000 IOPS | PCIe | Only via demonstration so far
E8 Storage | SSD | Up to 10 million IOPS | 10–100 Gb Ethernet | Rack-scale flash appliance
EMC DSSD D5 | Flash | Up to 10 million IOPS | PCIe, out of box up to 48 clients with high availability | PCIe rack-scale flash appliance; product discontinued
Pure Storage M50 | Flash | Up to 220,000 32 kB IOPS at <1 ms average latency; up to 7 GB/s bandwidth | 16 Gbit/s Fibre Channel; 10 Gbit/s Ethernet iSCSI; 10 Gbit/s replication ports; 1 Gbit/s management ports | 3U–7U; 1007–1447 W; 95 lb fully loaded plus 44 lb per expansion shelf; 5.12" × 18.94" × 29.72" chassis
Nimble Storage AF9000 | Flash | Up to 1.4 million IOPS | 16 Gbit/s Fibre Channel; 10 Gbit/s Ethernet iSCSI; 1/10 Gbit/s management ports | 3600 W; up to 2,212 TB raw capacity; up to 8 expansion shelves; 16 1/10 Gbit iSCSI management ports; optional 48 1/10 Gbit iSCSI ports; optional 96 8/16 Gbit Fibre Channel ports; Thermal