Pascal (microarchitecture)
Pascal is the codename for a GPU microarchitecture developed by Nvidia as the successor to the Maxwell architecture. The architecture was first introduced in April 2016 with the release of the Tesla P100 on April 5, 2016, and is primarily used in the GeForce 10 series, starting with the GeForce GTX 1080 and GTX 1070, which were released on May 27, 2016 and June 10, 2016 respectively. Pascal was manufactured using TSMC's 16 nm FinFET process, and later Samsung's 14 nm FinFET process.
The architecture is named after the 17th century French mathematician and physicist, Blaise Pascal.
On March 18, 2019, Nvidia announced that a driver due in April 2019 would enable DirectX Raytracing on Pascal-based cards, starting with the GTX 1060 6 GB, as well as on the 16 series cards, a feature that had until then been reserved for the Turing-based RTX series.
Details
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The first GeForce card based on it, the GTX 1080, was announced on May 6, 2016 and released on May 27 of the same year. The Tesla P100 uses a different version of the Pascal architecture compared to the GTX GPUs; the shader units in GP104 have a Maxwell-like design. Architectural improvements of the GP100 architecture include the following:
- In Pascal, an SM consists of 64 CUDA cores (GP100) or 128 CUDA cores (GP104). Maxwell packed 128, Kepler 192, Fermi 32 and Tesla only 8 CUDA cores into an SM. The GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
- CUDA Compute Capability 6.0.
- High Bandwidth Memory 2 — some cards feature 16 GiB of HBM2 in four stacks with a 4096-bit-wide bus and a memory bandwidth of 720 GB/s.
- Unified memory — a memory architecture, where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
- NVLink — a high-bandwidth bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.
- 16-bit floating-point operations can be executed at twice the rate of 32-bit floating-point operations, and 64-bit floating-point operations are executed at half the rate of 32-bit floating-point operations.
- More registers — twice the amount of registers per CUDA core compared to Maxwell.
- More shared memory.
- Dynamic load balancing scheduling system. This allows the scheduler to dynamically adjust the amount of the GPU assigned to multiple tasks, ensuring that the GPU remains saturated with work except when there is no more work that can safely be distributed. Nvidia has therefore been able to safely enable asynchronous compute in Pascal's driver.
- Instruction-level and thread-level preemption.
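The 720 GB/s figure quoted for HBM2 above follows directly from the bus width and the per-pin data rate. A minimal sketch of the calculation; the 1.4 Gbit/s per-pin rate is an assumed value, chosen only because it is consistent with the stated bandwidth:

```python
# Theoretical peak memory bandwidth = (bus width in bytes) x per-pin data rate.
# The per-pin rate of 1.4 Gbit/s is an assumption consistent with the
# ~720 GB/s quoted for the Tesla P100's 4096-bit HBM2 interface.
bus_width_bits = 4096
per_pin_rate_gbit_s = 1.4                 # assumed per-pin data rate
bandwidth_gb_s = (bus_width_bits / 8) * per_pin_rate_gbit_s
print(round(bandwidth_gb_s))              # ~717, i.e. roughly the quoted 720 GB/s
```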
Architectural improvements of the GP104 architecture include the following:
- CUDA Compute Capability 6.1.
- GDDR5X — new memory standard supporting 10 Gbit/s data rates, and an updated memory controller.
- Simultaneous Multi-Projection - generating multiple projections of a single geometry stream, as it enters the SMP engine from upstream shader stages.
- DisplayPort 1.4, HDMI 2.0b.
- Fourth generation Delta Color Compression.
- Enhanced SLI Interface — SLI interface with higher bandwidth compared to the previous versions.
- PureVideo Feature Set H hardware video decoding: HEVC Main10, Main12 and VP9 hardware decoding.
- HDCP 2.2 support for 4K DRM protected content playback and streaming.
- NVENC HEVC Main10 (10-bit) hardware encoding.
- GPU Boost 3.0.
- Instruction-level preemption. In graphics tasks, the driver restricts this to pixel-level preemption, because pixel tasks typically finish quickly and the overhead of pixel-level preemption is much lower than that of instruction-level preemption. Compute tasks get thread-level or instruction-level preemption: compute tasks can take a long time to finish, and there is no guarantee on when a compute task finishes, so the driver enables the comparatively expensive instruction-level preemption for these tasks.
Overview
Graphics Processor Cluster
A chip is partitioned into Graphics Processor Clusters (GPCs). For the GP104 chips, a GPC encompasses 5 SMs.
Streaming Multiprocessor "Pascal"
A "Streaming Multiprocessor" corresponds to AMD's Compute Unit. An SM encompasses 128 single-precision ALUs on GP104 chips and 64 single-precision ALUs on GP100 chips. What AMD calls a CU can be compared to what Nvidia calls an SM. While all CU versions consist of 64 shader processors, Nvidia has experimented with very different numbers:
- On Tesla 1 SM combines 8 single-precision shader processors
- On Fermi 1 SM combines 32 single-precision shader processors
- On Kepler 1 SM combines 192 single-precision shader processors and also 64 double-precision units
- On Maxwell 1 SM combines 128 single-precision shader processors
- On Pascal it depends:
- * On the GP100, 1 SM combines 64 single-precision shader processors and also 32 double-precision units, providing a 2:1 ratio of single- to double-precision throughput. The GP100 uses more flexible FP32 cores that can process either one single-precision number or two half-precision numbers in a two-element vector; Nvidia intends these to address calculations in deep learning.
- * On the GP104, 1 SM combines 128 single-precision ALUs and 4 double-precision ALUs, providing a 32:1 ratio, plus one half-precision ALU that operates on a vector of two half-precision floats, executing the same instruction on both and providing a 64:1 ratio if the same instruction is used on both elements.
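The per-SM counts above combine with a chip's SM count to give total shader-processor numbers. A minimal sketch; the 20-SM figure for a full GP104 is taken from the chip descriptions below (the GTX 1070 enables 15 of 20 SMs):

```python
# Single-precision shader processors per SM for each architecture,
# taken from the list above.
cores_per_sm = {
    "Tesla": 8,
    "Fermi": 32,
    "Kepler": 192,
    "Maxwell": 128,
    "Pascal GP100": 64,
    "Pascal GP104": 128,
}

def total_cores(arch, sm_count):
    """Total single-precision shader processors on a chip."""
    return cores_per_sm[arch] * sm_count

# A full GP104 (GTX 1080) has 20 SMs:
print(total_cores("Pascal GP104", 20))  # 2560
```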
Polymorph-Engine 4.0
Chips
- GP100: Nvidia Tesla P100 GPU accelerator is targeted at GPGPU applications such as FP64 double precision compute and deep learning training that uses FP16. It uses HBM2 memory. Quadro GP100 also uses the GP100 GPU.
- GP102: This GPU is used in the TITAN Xp, Titan X and the GeForce GTX 1080 Ti. It is also used in the Quadro P6000 & Tesla P40.
- GP104: This GPU is used in the GeForce GTX 1070, GTX 1070 Ti and the GTX 1080. The GTX 1070 has 15/20 and the GTX 1070 Ti has 19/20 of its SMs enabled. Both are connected to GDDR5 memory, while the GTX 1080 is a full chip and is connected to GDDR5X memory. It is also used in the Quadro P5000, Quadro P4000 and Tesla P4.
- GP106: This GPU is used in the GeForce GTX 1060 with GDDR5/GDDR5X memory. It is also used in the Quadro P2000.
- GP107: This GPU is used in the GeForce GTX 1050 Ti and GeForce GTX 1050. It is also used in the Quadro P1000, Quadro P600, Quadro P620 & Quadro P400.
- GP108: This GPU is used in the GeForce GT 1030.
| | GK104 | GK110 | GM204 | GM204 | GM200 | GP104 | GP100 |
|---|---|---|---|---|---|---|---|
| Dedicated texture cache per SM | 48 KiB | — | — | — | — | — | — |
| Texture or read-only data cache per SM | — | 48 KiB | — | — | — | — | — |
| Programmer-selectable shared memory/L1 partitions per SM | 48 KiB shared memory + 16 KiB L1 cache | 48 KiB shared memory + 16 KiB L1 cache | — | — | — | — | — |
| | 32 KiB shared memory + 32 KiB L1 cache | 32 KiB shared memory + 32 KiB L1 cache | — | — | — | — | — |
| | 16 KiB shared memory + 48 KiB L1 cache | 16 KiB shared memory + 48 KiB L1 cache | — | — | — | — | — |
| Unified L1 cache/texture cache per SM | — | — | 48 KiB | 48 KiB | 48 KiB | 48 KiB | 24 KiB |
| Dedicated shared memory per SM | — | — | 96 KiB | 96 KiB | 96 KiB | 96 KiB | 64 KiB |
| L2 cache per chip | 512 KiB | 1536 KiB | 1792 KiB | 2048 KiB | 3072 KiB | 2048 KiB | 4096 KiB |
Performance
The theoretical single-precision processing power of a Pascal GPU in GFLOPS is computed as 2 × number of CUDA cores × core clock speed (in GHz). The theoretical double-precision processing power of a Pascal GPU is 1/2 of the single-precision performance on GP100, and 1/32 on GP102, GP104, GP106, GP107 and GP108.
The theoretical half-precision processing power of a Pascal GPU is 2× the single-precision performance on GP100, and 1/64 on GP104, GP106, GP107 and GP108.
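As a worked example of the formula above, a sketch for a fully enabled GP104 with 2560 CUDA cores (20 SMs × 128 cores); the 1.733 GHz boost clock is an assumed, representative value used only for illustration:

```python
def sp_gflops(cuda_cores, clock_ghz):
    # Each core can issue a fused multiply-add per cycle, which
    # counts as two floating-point operations, hence the factor of 2.
    return 2 * cuda_cores * clock_ghz

# Assumed example: GP104 with 2560 CUDA cores at 1.733 GHz boost clock.
sp = sp_gflops(2560, 1.733)
print(round(sp))        # 8873 GFLOPS single precision
print(round(sp / 32))   # GP104 double precision is 1/32 of that
print(round(sp / 64))   # GP104 half precision is 1/64 of that
```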