
Zorn

Zorn was an early experimental GPU-based system at PDC that was established to help Swedish researchers evaluate the benefits of GPUs for highly parallel computing.

Background

PDC started experimenting with GPUs for accelerating academic research computation in 2010 with a small initial installation, which came to be known as Zorn. It was supported by a grant from the Knut and Alice Wallenberg Foundation that was awarded to a research group at KTH involved in the development of advanced simulation software.

The Swedish National Infrastructure for Computing (SNIC) also picked up on the trend and started a large-scale evaluation of GPU technology. That provided an opportunity for PDC to triple the computational power of the initial system. In 2012, SNIC started a three-year pilot project to evaluate the potential of graphics processing unit (GPU) technology for research purposes. Two experimental clusters equipped with NVIDIA graphics cards were made available to Swedish researchers through this project. It started with the establishment of the Zorn system (also referred to as a cluster) at PDC at the beginning of 2012 and continued with another cluster, Erik, at the Lunarc Centre for Scientific and Technical Computing, Lund University, in 2013.

Over the course of the project, many research groups used these clusters to try out GPU technology. The performance of GPU-accelerated systems was impressive and led to an increasing demand for computational resources that supported computing with GPUs. Zorn was decommissioned several years later, in March 2015, when the main system at PDC was the CPU-based Beskow system, but its successes contributed to the next flagship system at PDC having a heterogeneous architecture incorporating both CPU- and GPU-based partitions.

As Zorn utilised graphics processing units (GPUs), it was named after the Swedish painter Anders Zorn, who was also a skilled sculptor and etcher. The Zorn system was surprisingly small but nevertheless had impressive capabilities: it reached a peak performance of more than 45 teraflops. Its 12 compute nodes were packed with 40 GPUs and connected through an Infiniband QDR (Quad Data Rate) network, and the local disk capacity for everyday usage amounted to 10 TB. Systems with comparable computational power had first started to appear on the TOP500 list about a decade earlier. They were big installations at that time and required several hundred kilowatts of electrical power to run; in comparison, Zorn fitted into a single rack and used less than 20 kilowatts.

Such an exciting development over just a few years was made possible by using graphics processing units for numerical simulations. This started about ten years previously with enthusiasts who used the programmable components of graphics cards, through clumsy interfaces, for calculations (rather than using the cards for their original purpose of producing images on computer screens). Manufacturers soon recognised the potential of this idea and have since been driving development to make GPUs equally well-suited both for displaying graphics and for performing numerical simulations.

Specifications

Initial GPU test cluster

  • 1 node with 4 × NVIDIA GTX 580
  • 3 nodes, each with 4 × NVIDIA Tesla C2050

Upgraded GPU cluster

  • 8 nodes, each with
    • 92 GB RAM
    • 2 × Intel Xeon E5620 CPUs (Nehalem/Westmere)
    • 3 × NVIDIA Tesla M2090
    • NVIDIA CUDA Toolkit v5.5 (default queue)
  • 1 node with
    • 48 GB RAM
    • 2 × Intel Xeon E5620 CPUs (Nehalem/Westmere)
    • 1 × NVIDIA Tesla K20
    • 1 × NVIDIA Tesla C2050
    • NVIDIA CUDA Toolkit v5.5 ("kepler" queue)
  • QDR Infiniband interconnect
  • Lustre file system with 15 TB
  • CentOS 6.5 (derived from Red Hat Enterprise Linux)
[Image: Four of Zorn’s compute nodes]
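
To give a sense of the kind of GPU-accelerated code that could be run on a system like this, the sketch below shows a minimal CUDA vector addition. It is an illustrative example only: the file name, array size and launch configuration are assumptions, not taken from any actual Zorn workload.

    // vector_add.cu - a minimal, hypothetical CUDA sketch (illustrative only,
    // not based on any actual Zorn workload).
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Allocate and fill host (CPU) buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Allocate device (GPU) buffers.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);

        // Copy the inputs to the GPU, run the kernel, copy the result back.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);          // expect 3.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

With the CUDA Toolkit 5.5 listed above, code of this kind could be compiled with, for example, nvcc -arch=sm_20 for the Fermi-class Tesla C2050 and M2090 cards in the default queue, or nvcc -arch=sm_35 for the Tesla K20 in the "kepler" queue.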