Single instruction, multiple threads


Single instruction, multiple threads (SIMT) is an execution model used in parallel computing in which single instruction, multiple data (SIMD) is combined with multithreading. It differs from SPMD in that all instructions in all "threads" are executed in lock-step. The SIMT execution model has been implemented on several GPUs and is relevant for general-purpose computing on graphics processing units (GPGPU); for example, some supercomputers combine CPUs with GPUs.
A group of processors appears to execute many more tasks than there are processors. This is achieved by each processor having multiple "threads", which execute in lock-step and are analogous to SIMD lanes.

History

SIMT was introduced by Nvidia in the Tesla GPU microarchitecture with the G80 chip. ATI Technologies released a competing product slightly later, on May 14, 2007: the TeraScale 1-based "R600" GPU chip.

Description

Because the access time of all widespread RAM types remains relatively high, engineers came up with the idea of hiding the latency that inevitably comes with each memory access. Strictly speaking, latency hiding is a feature of the zero-overhead scheduling implemented by modern GPUs; it may or may not be considered a property of SIMT itself.
SIMT is intended to limit instruction fetching overhead, i.e. the latency that comes with memory access, and is used in modern GPUs in combination with latency hiding to enable high-performance execution despite considerable latency in memory-access operations. With latency hiding, the processor is oversubscribed with computation tasks and can quickly switch between tasks when it would otherwise have to wait on memory. This strategy is comparable to multithreading in CPUs. As with SIMD, another major benefit is the sharing of the control logic by many data lanes, leading to an increase in computational density: one block of control logic can manage N data lanes, instead of replicating the control logic N times.
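The oversubscription strategy described above can be illustrated with a minimal scheduler sketch. This is a hypothetical simulation, not a real GPU API: warps are lists of abstract "compute"/"load" ops, and the assumed `mem_latency` parameter models how many cycles a simulated memory access takes before the warp becomes ready again.

```python
# Hypothetical sketch of zero-overhead warp scheduling (not a real GPU API):
# the scheduler is oversubscribed with "warps" and hides memory latency by
# switching to a ready warp whenever the current one stalls on a load.

from collections import deque

def run(warps, mem_latency=3):
    """Each warp is a list of ops: 'compute' or 'load'. Returns the trace of
    (warp_id, op) pairs issued in order, switching away from stalled warps."""
    ready = deque(enumerate(warps))      # (warp_id, remaining ops)
    stalled = []                         # (wake_cycle, warp_id, remaining ops)
    trace, cycle = [], 0
    while ready or stalled:
        # Wake warps whose simulated memory access has completed.
        still_waiting = []
        for wake, wid, ops in stalled:
            if wake <= cycle:
                ready.append((wid, ops))
            else:
                still_waiting.append((wake, wid, ops))
        stalled = still_waiting
        if ready:
            wid, ops = ready.popleft()
            op, rest = ops[0], ops[1:]
            trace.append((wid, op))      # issue one op this cycle
            if op == "load" and rest:
                # Stall on memory: park this warp, switch to another.
                stalled.append((cycle + mem_latency, wid, rest))
            elif rest:
                ready.append((wid, rest))
        cycle += 1
    return trace
```

With two warps each doing a load followed by a compute, the trace shows warp 1 issuing its load while warp 0 waits on memory: `run([["load", "compute"], ["load", "compute"]])` yields `[(0, 'load'), (1, 'load'), (0, 'compute'), (1, 'compute')]`.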
A downside of SIMT execution is that thread-specific control flow is performed using "masking", leading to poor utilization when a processor's threads follow different control-flow paths. For instance, to handle an IF-ELSE block where various threads of a processor take different paths, all threads must actually process both paths, with masking used to disable and enable the threads as appropriate. Masking is avoided when control flow is coherent for the threads of a processor, i.e. they all follow the same path of execution. The masking strategy is what distinguishes SIMT from ordinary SIMD, and has the benefit of inexpensive synchronization between the threads of a processor.
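The masking of an IF-ELSE block can be sketched as follows. This is an illustrative simulation, not real GPU hardware behavior: the function name `simt_if_else` and its parameters are assumptions for the example, and the per-lane Python loops stand in for lock-step lane execution.

```python
# Hypothetical sketch of SIMT masking (not a real GPU instruction set): every
# lane of a "warp" steps through BOTH sides of an IF-ELSE in lock-step; a
# per-lane mask decides which lanes commit results on each side.

def simt_if_else(values, cond, then_op, else_op):
    """Apply then_op where cond holds and else_op elsewhere, SIMT-style."""
    mask = [cond(v) for v in values]     # per-lane predicate
    out = list(values)
    # THEN path: the warp executes it for all lanes, but only
    # masked-on lanes commit their result.
    for i, v in enumerate(values):
        if mask[i]:
            out[i] = then_op(v)
    # ELSE path: the mask is inverted; by now the warp has paid
    # the cost of both paths, regardless of how lanes diverged.
    for i, v in enumerate(values):
        if not mask[i]:
            out[i] = else_op(v)
    return out
```

For example, `simt_if_else([1, 2, 3, 4], lambda v: v % 2 == 0, lambda v: v * 10, lambda v: -v)` returns `[-1, 20, -3, 40]`: the even lanes take the THEN path and the odd lanes the ELSE path, yet the warp as a whole traverses both.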
Nvidia CUDA   OpenCL      Hennessy & Patterson
-----------   ---------   --------------------------------
Thread        Work-item   Sequence of SIMD Lane operations
Warp          Wavefront   Thread of SIMD Instructions
Block         Workgroup   Body of vectorized loop
Grid          NDRange     Vectorized loop