NASA Advanced Supercomputing Division


The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field, in the heart of Silicon Valley in Mountain View, California. For over thirty years it has been the major supercomputing, modeling, and simulation resource for NASA missions in aerodynamics, space exploration, studies of weather patterns and ocean currents, and Space Shuttle and aircraft design and development.
The facility currently houses the petascale Pleiades, Aitken, and Electra supercomputers, as well as the terascale Endeavour supercomputer. The systems are based on SGI and HPE architectures with Intel processors. The main building also houses disk and archival tape storage systems with a capacity of over an exabyte of data, the hyperwall visualization system, and one of the largest InfiniBand network fabrics in the world. The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability Project.

History

Founding

In the mid-1970s, a group of aerospace engineers at Ames Research Center began to look into transferring aerospace research and development from costly and time-consuming wind tunnel testing to simulation-based design and engineering using computational fluid dynamics models on supercomputers more powerful than those commercially available at the time. This endeavor was later named the Numerical Aerodynamic Simulator Project and the first computer was installed at the Central Computing Facility at Ames Research Center in 1984.
Groundbreaking on a state-of-the-art supercomputing facility took place on March 14, 1985 in order to construct a building where CFD experts, computer scientists, visualization specialists, and network and storage engineers could be under one roof in a collaborative environment. In 1986, NAS transitioned into a full-fledged NASA division and in 1987, NAS staff and equipment, including a second supercomputer, a Cray-2 named Navier, were relocated to the new facility, which was dedicated on March 9, 1987.
In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to its current name, the NASA Advanced Supercomputing Division.

Industry-Leading Innovations

NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing; among its firsts was NAStore, the first UNIX-based hierarchical mass storage system, described below.

Software Development

NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement.
Important software developed at NAS includes the NAS Parallel Benchmarks (NPB), a suite of performance tests derived from CFD applications that became a standard for evaluating parallel supercomputers, and the Portable Batch System (PBS), the first batch-queuing software for parallel and distributed systems, later released as a commercial product.

Supercomputing History

Since its construction in 1987, the NASA Advanced Supercomputing Facility has housed and operated some of the most powerful supercomputers in the world. Many of these computers were testbed systems built to evaluate new architectures, hardware, or networking setups that might be deployed at larger scale. Peak performance is listed in floating-point operations per second (FLOPS) and represents a theoretical maximum; a sketch of how such figures are computed follows the table.
Computer Name | Architecture | Peak Performance | Number of CPUs | Installation Date
— | Cray XMP-12 | 210.53 megaflops | 1 | 1984
Navier | Cray 2 | 1.95 gigaflops | 4 | 1985
Chuck | Convex 3820 | 1.9 gigaflops | 8 | 1987
Pierre | Thinking Machines CM2 | 14.34 gigaflops | 16,000 | 1987
Pierre | Thinking Machines CM2 | 43 gigaflops | 48,000 | 1991
Stokes | Cray 2 | 1.95 gigaflops | 4 | 1988
Piper | CDC/ETA-10Q | 840 megaflops | 4 | 1988
Reynolds | Cray Y-MP | 2.54 gigaflops | 8 | 1988
Reynolds | Cray Y-MP | 2.67 gigaflops | 8 | 1988
Lagrange | Intel iPSC/860 | 7.88 gigaflops | 128 | 1990
Gamma | Intel iPSC/860 | 7.68 gigaflops | 128 | 1990
von Karman | Convex 3240 | 200 megaflops | 4 | 1991
Boltzmann | Thinking Machines CM5 | 16.38 gigaflops | 128 | 1993
Sigma | Intel Paragon | 15.60 gigaflops | 208 | 1993
von Neumann | Cray C90 | 15.36 gigaflops | 16 | 1993
Eagle | Cray C90 | 7.68 gigaflops | 8 | 1993
Grace | Intel Paragon | 15.6 gigaflops | 209 | 1993
Babbage | IBM SP-2 | 34.05 gigaflops | 128 | 1994
Babbage | IBM SP-2 | 42.56 gigaflops | 160 | 1994
da Vinci | SGI Power Challenge | — | 16 | 1994
da Vinci | SGI Power Challenge XL | 11.52 gigaflops | 32 | 1995
Newton | Cray J90 | 7.2 gigaflops | 36 | 1996
Piglet | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997
Turing | SGI Origin 2000/195 MHz | 9.36 gigaflops | 24 | 1997
Turing | SGI Origin 2000/195 MHz | 25 gigaflops | 64 | 1997
Fermi | SGI Origin 2000/195 MHz | 3.12 gigaflops | 8 | 1997
Hopper | SGI Origin 2000/250 MHz | 32 gigaflops | 64 | 1997
Evelyn | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997
Steger | SGI Origin 2000/250 MHz | 64 gigaflops | 128 | 1997
Steger | SGI Origin 2000/250 MHz | 128 gigaflops | 256 | 1998
Lomax | SGI Origin 2800/300 MHz | 307.2 gigaflops | 512 | 1999
Lomax | SGI Origin 2800/300 MHz | 409.6 gigaflops | 512 | 2000
Lou | SGI Origin 2000/250 MHz | 4.68 gigaflops | 12 | 1999
Ariel | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000
Sebastian | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000
SN1-512 | SGI Origin 3000/400 MHz | 409.6 gigaflops | 512 | 2001
Bright | Cray SVe1/500 MHz | 64 gigaflops | 32 | 2001
Chapman | SGI Origin 3800/400 MHz | 819.2 gigaflops | 1,024 | 2001
Chapman | SGI Origin 3800/400 MHz | 1.23 teraflops | 1,024 | 2002
Lomax II | SGI Origin 3800/400 MHz | 409.6 gigaflops | 512 | 2002
Kalpana | SGI Altix 3000 | 2.66 teraflops | 512 | 2003
— | Cray X1 | 204.8 gigaflops | — | 2004
Columbia | SGI Altix 3000 | 63 teraflops | 10,240 | 2004
Columbia | SGI Altix 4700 | — | 10,296 | 2006
Columbia | SGI Altix 4700 | 85.8 teraflops | 13,824 | 2007
Schirra | IBM POWER5+ | 4.8 teraflops | 640 | 2007
RT Jones | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 43.5 teraflops | 4,096 | 2007
Pleiades | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 487 teraflops | 51,200 | 2008
Pleiades | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 544 teraflops | 56,320 | 2009
Pleiades | SGI ICE 8200, Intel Xeon "Harpertown"/"Nehalem" Processors | 773 teraflops | 81,920 | 2010
Pleiades | SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/"Westmere" Processors | 1.09 petaflops | 111,104 | 2011
Pleiades | SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/"Sandy Bridge" Processors | 1.24 petaflops | 125,980 | 2012
Pleiades | SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" Processors | 2.87 petaflops | 162,496 | 2013
Pleiades | SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" Processors | 3.59 petaflops | 184,800 | 2014
Pleiades | SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" Processors | 4.49 petaflops | 198,432 | 2014
Pleiades | SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" Processors | 5.35 petaflops | 210,336 | 2015
Pleiades | SGI ICE X, Intel Xeon "Sandy Bridge"/"Ivy Bridge"/"Haswell"/"Broadwell" Processors | 7.25 petaflops | 246,048 | 2016
Endeavour | SGI UV 2000, Intel Xeon "Sandy Bridge" Processors | 32 teraflops | 1,536 | 2013
Merope | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 61 teraflops | 5,120 | 2013
Merope | SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" Processors | 141 teraflops | 1,152 | 2014
Electra | SGI ICE X, Intel Xeon "Broadwell" Processors | 1.9 petaflops | 1,152 | 2016
Electra | SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/"Skylake" Processors | 4.79 petaflops | 2,304 | 2017
Electra | SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/"Skylake" Processors | 8.32 petaflops | 3,456 | 2018
Aitken | HPE SGI 8600 E-Cell, Intel Xeon "Cascade Lake" Processors | 3.69 petaflops | 1,150 | 2019
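The peak figures above are theoretical maxima rather than measured throughput: they are generally the product of processor count, clock rate, and floating-point operations completed per clock cycle. A minimal sketch in Python; the two-operations-per-cycle figure is an illustrative assumption, though it happens to reproduce the table's 409.6-gigaflop Origin 3000 entry:

```python
# Theoretical peak = CPUs x clock rate x floating-point operations per cycle.
# The per-cycle figure used below is an illustrative assumption.

def peak_flops(num_cpus: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Return the theoretical peak in floating-point operations per second."""
    return num_cpus * clock_hz * flops_per_cycle

# A 512-CPU machine of 400 MHz processors retiring 2 floating-point
# operations per cycle matches the table's 512-CPU Origin 3000 row:
print(f"{peak_flops(512, 400e6, 2) / 1e9:.1f} gigaflops")  # -> 409.6 gigaflops
```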

Storage Resources

Disk Storage

In 1987, NAS partnered with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley in the Redundant Array of Inexpensive Disks (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today.
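The central RAID idea, striping data across several drives with a parity block so that any single failed drive can be rebuilt from the survivors, can be sketched in a few lines of Python. This is illustrative only; real RAID implementations operate on fixed-size blocks in hardware or the operating system:

```python
def xor_parity(blocks):
    """XOR equal-length blocks together; the result is the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Stripe: data blocks live on drives 0-2, the parity block on drive 3.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Simulate losing drive 1: XOR-ing the parity with the surviving blocks
# cancels them out, leaving exactly the lost block.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```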
The NAS facility currently houses disk mass storage on an SGI parallel DMF (Data Migration Facility) cluster with high-availability software, consisting of four 32-processor front-end systems connected to the supercomputers and to the archival tape storage system. The system has 192 GB of memory per front end and 7.6 petabytes of disk cache. Data stored on disk is regularly migrated to the facility's tape archive to free up space for other user projects running on the supercomputers.
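A minimal sketch of this kind of disk-to-tape migration policy: once the disk cache fills past a high-water mark, the least recently used files are moved to the archive until usage falls below a low-water mark. The paths, thresholds, and function names are hypothetical and do not reflect DMF's actual interface:

```python
import os
import shutil

CACHE_DIR = "/nobackup/cache"    # hypothetical disk-cache mount point
ARCHIVE_DIR = "/archive"         # hypothetical tape-archive staging area
HIGH_WATER = 0.90                # start migrating at 90% full
LOW_WATER = 0.70                 # stop once usage drops below 70%

def usage(path):
    stat = shutil.disk_usage(path)
    return stat.used / stat.total

def migrate_lru():
    if usage(CACHE_DIR) < HIGH_WATER:
        return
    files = [os.path.join(CACHE_DIR, f) for f in os.listdir(CACHE_DIR)]
    files.sort(key=os.path.getatime)       # least recently used first
    for f in files:
        shutil.move(f, ARCHIVE_DIR)        # stand-in for a tape write
        if usage(CACHE_DIR) < LOW_WATER:
            break
```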

Archive and Storage Systems

In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two StorageTek 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds.
With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had used for 20 years could no longer meet the needs of the growing number of users and the increasing size of each project's datasets. In 2009, NAS brought in Spectra Logic T950 robotic tape systems, which increased the facility's maximum archival capacity to 16 petabytes available for users to archive their data from the supercomputers. As of March 2019, the facility had increased the total archival capacity of the Spectra Logic tape libraries to 1,048 petabytes (roughly an exabyte) with 35% compression. SGI's Data Migration Facility and OpenVault manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
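The compressed-capacity figure can be unpacked with quick arithmetic; the reading below (35% compression meaning stored data shrinks to 65% of its original size) is an assumption, not a NAS-published breakdown:

```python
# Hypothetical reading of "1,048 PB with 35% compression": data shrinks to
# 65% of its original size, so the native tape capacity is correspondingly lower.
effective_pb = 1048
native_pb = effective_pb * (1 - 0.35)   # ~681 PB of raw tape capacity
print(f"~{native_pb:.0f} PB native -> {effective_pb} PB effective")
```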
As of March 2019, more than 110 petabytes of unique data were stored in the NAS archival storage system.

Data Visualization Systems

In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, the beginning of their long partnership with the Silicon Valley-based company, which made a significant impact on post-processing and visualization of CFD results run on the supercomputers at the facility. Visualization became a key process in the analysis of simulation data run on the supercomputers, allowing engineers and scientists to view their results spatially and in ways that allowed for a greater understanding of the CFD forces at work in their designs.

The hyperwall

In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked LCD panels that allowed scientists to view complex datasets on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet.
The second-generation "hyperwall-2", developed in 2008 by NAS in partnership with Colfax International, is made up of 128 LCD screens arranged in an 8x16 grid measuring 23 feet wide by 10 feet tall. It is capable of displaying a quarter of a billion pixels, making it, at the time, the highest-resolution scientific visualization system in the world. It contains 128 nodes, each with two quad-core AMD Opteron processors and an Nvidia GeForce GTX 480 graphics processing unit, for a dedicated peak processing power of 128 teraflops across the entire system, 100 times more powerful than the original hyperwall. The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, which allows the system to read data directly from the filesystem without needing to copy files onto the hyperwall-2's memory.
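The quarter-billion-pixel figure is consistent with 128 panels of roughly two megapixels each, and the "single image across all screens" mode amounts to assigning each panel its own rectangle of one large logical image. A quick check, assuming (hypothetically) 1920x1080 panels:

```python
# Sanity check of the figures above, plus the core tiling idea: each screen
# renders only its own rectangle of one large logical image. The 1920x1080
# per-panel resolution is an assumption, not a published specification.

COLS, ROWS = 16, 8          # 128 screens: 16 across (23 ft wide), 8 tall (10 ft)
W, H = 1920, 1080           # assumed per-panel resolution

total_pixels = COLS * ROWS * W * H
print(f"{total_pixels / 1e6:.0f} megapixels")   # ~265 million, a quarter billion

def tile_viewport(col, row):
    """Rectangle (x, y, width, height) of the global image for one screen."""
    return (col * W, row * H, W, H)

print(tile_viewport(3, 2))  # screen (3, 2) draws the region at (5760, 2160)
```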
In 2014, the hyperwall was upgraded with new hardware: 128 Intel Xeon "Ivy Bridge" processors and Nvidia GeForce GTX 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops and brought its graphics memory to nearly 400 gigabytes.

Concurrent Visualization

An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... may show features in a simulation that would otherwise not be visible."
The NAS visualization team developed a configurable concurrent pipeline for use with a massively parallel forecast model run on the Columbia supercomputer in 2005 to help predict the Atlantic hurricane season for the National Hurricane Center. Because each forecast had to be submitted by a deadline, it was essential that the visualization process not significantly impede the simulation or cause it to fail.
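A minimal sketch of that constraint, a solver that never blocks on rendering: frames are handed to a visualization thread through a bounded queue and simply dropped when the pipeline is busy, so a slow or failed renderer cannot stall the simulation. All names here are illustrative, not the actual NAS pipeline:

```python
import queue
import threading
import time

frames = queue.Queue(maxsize=4)   # small buffer between solver and renderer

def advance_one_step():
    time.sleep(0.01)              # stand-in for one solver timestep
    return [0.0] * 1024           # stand-in for the computed field

def render(field):
    time.sleep(0.05)              # stand-in for (slower) rendering work

def render_worker():
    while True:
        step, field = frames.get()
        render(field)
        print(f"rendered step {step}")

def simulate(num_steps):
    for step in range(num_steps):
        field = advance_one_step()
        try:
            frames.put_nowait((step, field))  # hand off without waiting
        except queue.Full:
            pass                              # drop the frame; solver goes on

threading.Thread(target=render_worker, daemon=True).start()
simulate(50)
time.sleep(1)   # let the renderer drain before the program exits
```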
