We only discuss the latest model, the AuroraHPC 10-10, here, as the earlier model has the same macro-architecture but uses less powerful Nehalem EP processors instead of Ivy Bridge processors.
The Aurora system has most characteristics of the average cluster, but there are a number of distinguishing factors that warrant its description in this report. For instance, one can choose SSD storage instead of spinning disks. The liquid cooling on a per-node basis also contributes to the energy efficiency, as no power is used for memory that is not active.
The interconnect infrastructure is out of the ordinary in comparison with the standard cluster. It has a QDR InfiniBand network in common with other clusters but, in addition, it also contains a 3-D torus network. Together these form what Eurotech calls its Unified Network Architecture, with a latency of about 1 µs and a point-to-point bandwidth of 2.5 GB/s. The network processor is in fact a rather large Altera Stratix IV FPGA, which provides the possibility of reconfiguring the network and of hardware synchronisation of MPI primitives.
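The quoted latency and bandwidth figures can be combined in the usual linear (alpha-beta) cost model to estimate point-to-point transfer times. The model is a generic first-order approximation, not something from the Aurora documentation; only the two constants come from the figures above.

```python
# Alpha-beta (latency + bandwidth) model for a point-to-point message
# on the Aurora interconnect, using the figures quoted in the text.
# The model itself is a standard approximation, not vendor data.

LATENCY_S = 1e-6            # ~1 microsecond
BANDWIDTH_B_PER_S = 2.5e9   # 2.5 GB/s point-to-point

def transfer_time(message_bytes: float) -> float:
    """Estimated time in seconds to deliver one message."""
    return LATENCY_S + message_bytes / BANDWIDTH_B_PER_S

# Small messages are latency-dominated, large ones bandwidth-dominated:
for size in (8, 1024, 1024**2):   # 8 B, 1 KiB, 1 MiB
    print(f"{size:>8} B: {transfer_time(size) * 1e6:8.1f} us")
```

For an 8-byte message the 1 µs latency dominates completely; the crossover to bandwidth-dominated behaviour lies at a few kilobytes.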
An Aurora node consists of two 12-core Intel Ivy Bridge processors, each with its associated DDR3 memory, giving a maximum of 32 GB per node. Via the Tylersburg bridge the node is connected through PCIe Gen2 to the network processor, containing the Stratix FPGA, that drives the 3-D torus network and a Mellanox ConnectX InfiniBand HCA.
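From the node composition one can sketch a theoretical peak performance. The core counts are from the text; the 8 double-precision flops per cycle per core is the standard AVX figure for Ivy Bridge, and the 2.4 GHz clock is an assumed, typical Ivy Bridge frequency, not a figure given in the Aurora documentation.

```python
# Rough theoretical peak for one Aurora node: two 12-core Ivy Bridge
# CPUs at an ASSUMED 2.4 GHz clock (not stated in the documentation),
# with 8 DP flops/cycle/core (AVX: 4-wide add + 4-wide multiply).

SOCKETS = 2
CORES_PER_SOCKET = 12
FLOPS_PER_CYCLE = 8
CLOCK_HZ = 2.4e9   # assumption; actual SKU may differ

peak_gflops = SOCKETS * CORES_PER_SOCKET * FLOPS_PER_CYCLE * CLOCK_HZ / 1e9
print(f"Per-node peak: {peak_gflops:.0f} Gflop/s")
```

Under these assumptions a node delivers roughly 0.46 Tflop/s, which is consistent with the suggestion that about 10 racks would reach a Petaflop.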
In principle the FPGA has sufficient capacity to also be used as a computational accelerator, but Eurotech has no fixed plans yet to offer it as such. Eurotech does not give a maximum configuration for the Aurora, but the brochures suggest that it considers building a Petaflop system (10 racks) entirely possible.
Although the Aurora documentation is not very clear on the software that is available, it is evident that Linux is the OS and that the usual Intel compiler suite is available. The MPI version is optimised for the architecture, but system-agnostic MPI versions can also be used.
At the time of writing this report no official performance figures for the AuroraHPC 10-10 are available.
The Eurotech Aurora Tigon
As in the Bull systems, Eurotech markets an accelerator-enhanced system, called the Tigon. In the Tigon, two of the standard CPUs in a node can be replaced by either NVIDIA Kepler K20Xs or by Intel Xeon Phis (or any mix thereof). This should lead to a peak performance that is about 3.5 times higher than that of a CPU-only rack: ≈ 350 Tflop/s at a power consumption of about 100 kW/rack.
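The quoted rack figures imply a specific energy efficiency and a CPU-only baseline, which the short check below works out; all three input numbers are the ones quoted above.

```python
# Back-of-the-envelope check of the quoted Tigon rack figures:
# ~350 Tflop/s per rack at ~100 kW, a factor ~3.5 over a CPU-only rack.

peak_tflops = 350.0
power_kw = 100.0
speedup = 3.5

gflops_per_watt = (peak_tflops * 1e3) / (power_kw * 1e3)
cpu_only_tflops = peak_tflops / speedup

print(f"Efficiency: {gflops_per_watt:.1f} Gflop/s per W")
print(f"Implied CPU-only rack peak: {cpu_only_tflops:.0f} Tflop/s")
```

The implied 3.5 Gflop/s per W is a theoretical-peak figure; sustained efficiency on real workloads would of course be lower.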
At the time of writing this report no official performance figures for the Aurora Tigon are available.