GPI-Space


GPI-Space is parallel programming development software developed by the Fraunhofer Institute for Industrial Mathematics (ITWM). The main concept behind the software is the separation of domain knowledge from HPC knowledge, leaving each part to the respective experts while GPI-Space, as a framework, integrates the two.
GPI-Space makes use of GPI to solve big data problems more efficiently than existing solutions.
GPI-Space was first introduced in a domain-specific version for geology, under the name SDPA, at SEG 2010 in Houston.

Core layers

GPI-Space comes with several layers that make up the core of the parallel programming development software.

Runtime engine

The runtime engine is responsible for distributing the available jobs across the available systems. In large-scale HPC clusters, these systems can be heterogeneous and consist of traditional compute nodes as well as nodes with accelerator cards, such as GPUs or Intel's Xeon Phi. Besides the mere scheduling and distribution of jobs, the runtime engine also adds fault tolerance: jobs are monitored after they have been assigned and are reassigned to different resources in case the initially assigned hardware fails. New hardware can be added dynamically.
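
The schedule-and-reassign behavior can be illustrated with a minimal, hypothetical C++ sketch; the names and data structures below are purely illustrative and do not correspond to GPI-Space's actual scheduler.

// Illustrative sketch only: a job queue, a set of (possibly heterogeneous)
// nodes, and a monitoring step that requeues jobs from failed nodes.
#include <iostream>
#include <optional>
#include <queue>
#include <string>
#include <vector>

struct Job  { int id; };
struct Node { std::string name; bool alive; std::optional<int> running_job; };

// Assign queued jobs to any alive, idle node.
void schedule(std::queue<Job>& pending, std::vector<Node>& nodes) {
  for (auto& node : nodes) {
    if (pending.empty()) break;
    if (node.alive && !node.running_job) {
      node.running_job = pending.front().id;
      pending.pop();
      std::cout << "job " << *node.running_job << " -> " << node.name << "\n";
    }
  }
}

// Monitoring step: jobs on failed nodes go back into the queue
// so they can be reassigned to healthy resources.
void recover(std::queue<Job>& pending, std::vector<Node>& nodes) {
  for (auto& node : nodes) {
    if (!node.alive && node.running_job) {
      std::cout << "node " << node.name << " failed, requeueing job "
                << *node.running_job << "\n";
      pending.push(Job{*node.running_job});
      node.running_job.reset();
    }
  }
}

int main() {
  std::queue<Job> pending;
  for (int i = 0; i < 3; ++i) pending.push(Job{i});

  // Heterogeneous resources; new nodes could be appended at runtime.
  std::vector<Node> nodes{{"cpu-node", true, {}}, {"gpu-node", true, {}}};

  schedule(pending, nodes);
  nodes[1].alive = false;   // simulate a hardware failure
  recover(pending, nodes);  // fault tolerance: job returns to the queue
  schedule(pending, nodes); // ...and is rescheduled elsewhere
}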

Workflow engine

The workflow engine translates instructions from an existing workflow, written in XML format with special GPI-Space tags, into the runtime environment's internal instructions, which are based on Petri nets. Workflows can be arbitrarily modular and can use other workflows as elements, allowing users to define building blocks once and reuse them in future, more complicated workflows. A graphical editor for workflows is available.
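
As an illustration of the underlying model, the following self-contained C++ sketch implements the generic Petri-net firing rule (a transition is enabled when all of its input places hold tokens, and firing moves tokens from input to output places); it is a textbook example and not GPI-Space's internal representation.

// Generic Petri-net firing rule; independent enabled transitions are the
// source of parallelism a runtime can exploit.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Marking = std::map<std::string, int>;  // place name -> token count

struct Transition {
  std::string name;
  std::vector<std::string> inputs;   // places consumed from
  std::vector<std::string> outputs;  // places produced to
};

// A transition is enabled when every input place holds at least one token.
bool enabled(const Transition& t, const Marking& m) {
  for (const auto& p : t.inputs)
    if (m.count(p) == 0 || m.at(p) < 1) return false;
  return true;
}

// Firing consumes one token per input place and produces one per output place.
void fire(const Transition& t, Marking& m) {
  for (const auto& p : t.inputs)  --m[p];
  for (const auto& p : t.outputs) ++m[p];
}

int main() {
  Marking m{{"raw_data", 2}, {"processed", 0}};
  Transition process{"process", {"raw_data"}, {"processed"}};

  while (enabled(process, m)) fire(process, m);

  std::cout << "processed tokens: " << m["processed"] << "\n";
}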

Autoparallelization engine

The autoparallelization engine decides how best to execute, in parallel, the code that is fed into the system. This relieves domain programmers of the need to parallelize their own code and lets them focus on their domain. HPC knowledge and experience from Fraunhofer ITWM's Competence Center for High-Performance Computing is an essential contributor to the engine's ability to generate highly optimized parallel code.

Virtual memory layer

All computation with GPI-Space can be done using a fast parallel file system such as BeeGFS, an approach very similar to other available big data solutions. Beyond this, GPI-Space is also capable of doing all computation in memory, thus avoiding the higher latencies and performance bottlenecks of traditional I/O. Using Fraunhofer GPI, one large block of a partitioned global address space is dynamically allocated. Its RDMA capability allows for fast, single-sided communication. Disk transfers to and from the virtual memory are completely asynchronous and hidden behind computation.
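
A minimal sketch of this single-sided communication style, written against the GASPI/GPI-2 C API that GPI-Space builds on, is shown below; error handling is omitted, the segment size and offsets are arbitrary, and the calls should be checked against the GPI-2 documentation rather than taken as a definitive usage pattern.

// Each process contributes one memory segment; together the segments form
// the partitioned global address space used for RDMA transfers.
#include <GASPI.h>
#include <cstring>
#include <iostream>

int main() {
  gaspi_proc_init(GASPI_BLOCK);

  gaspi_rank_t rank, nprocs;
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&nprocs);

  const gaspi_segment_id_t seg = 0;
  const gaspi_size_t size = 1 << 20;  // 1 MiB per process, chosen arbitrarily
  gaspi_segment_create(seg, size, GASPI_GROUP_ALL, GASPI_BLOCK,
                       GASPI_MEM_UNINITIALIZED);

  gaspi_pointer_t ptr;
  gaspi_segment_ptr(seg, &ptr);
  std::memset(ptr, rank, 1024);  // fill part of the local segment

  if (nprocs > 1 && rank == 0) {
    // Single-sided RDMA write into rank 1's segment: the target process is
    // not actively involved, so the transfer can overlap with computation.
    gaspi_write(seg, 0, 1, seg, 0, 1024, 0, GASPI_BLOCK);
    // gaspi_wait only guarantees local completion of the queue; real code
    // would additionally use a notification to signal remote completion.
    gaspi_wait(0, GASPI_BLOCK);
  }

  gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
  gaspi_proc_term(GASPI_BLOCK);
  std::cout << "rank " << rank << " done" << std::endl;
  return 0;
}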

Seismic Development and Programming Architecture (SDPA)

To demonstrate the validity of the GPI-Space approach, Fraunhofer first introduced it to the community as part of the Seismic Development and Programming Architecture at SEG 2010 in Houston, TX. The seismic domain contains countless legacy algorithms and codes, developed over many years in a variety of programming languages, that are not parallelized. Due to limited resources, it is often not feasible to rewrite those codes from scratch in a parallel version and in a single programming language.
Developers at the CC-HPC have put together domain-specific solutions for seismic data.
In addition, there is a set of basic workflows that can be used by the end user as building blocks for more sophisticated workflows. All these components solve the parallelization problem for the seismic domain, so that domain developers can focus on their own problems without having to deal with parallelization themselves.
End users of SDPA can then execute existing legacy codes and modules, written in any language, in parallel, significantly reducing turnaround time for projects. SDPA is also used as a fast way to prototype new ideas and algorithms for parallel execution.
SDPA is used by several of Fraunhofer's industry partners in a production environment.