System parameters:
Remarks: There is a multitude of high-end servers in the eServer p-series, but IBM singles out the POWER6-based p575 model specifically for HPC. The eServer p575 is the successor of the earlier POWER5+-based systems and retains much of their macro structure: multi-CPU nodes are connected within a frame, either by a dedicated switch or by other means such as switched Ethernet. The structure of the nodes, however, has changed considerably; see POWER6. Four dual-core POWER6 processors are housed in a Multi-Chip Module (MCM), and four MCMs constitute a p575 node, so 32 cores make up a node. The four MCMs are all directly connected to each other at a bandwidth of 80 GB/s, and these inter-MCM links are used to reach the memory modules that are not local to a core but lie elsewhere within the node. All memory in a node is therefore shared by the processor cores, although memory access is no longer uniform as it was in the earlier p575 models. No NUMA factor has been published yet but, given the node structure, it should be moderate. Obviously, shared-memory parallel programming, for instance with OpenMP, can be employed within a node.
In contrast to its earlier p575 clusters, IBM no longer provides its proprietary Federation switch for inter-node communication. Instead, one can configure a network from any vendor. In practice this will turn out to be InfiniBand in most cases, but switched Gigabit Ethernet, Myrinet, or a Quadrics network is entirely possible. For this reason no inter-node bandwidth values can be given, as the network is chosen by the user.
At present it is nowhere to be found what the maximum configuration of a
POWER6-based p575 system would be. The available online information is not
definitive and consistently speaks of planned characteristics of the
POWER6-based systems, even though several machines have already been installed
and are in operation: a 156 Tflop/s system at ECMWF in the UK, a 71 Tflop/s
system at NCAR in the USA, and a 60 Tflop/s system at SARA in The Netherlands.
Because of this lack of information, we cannot give details about maximum performance, memory,
etc. Applications can be run using PVM or MPI. IBM used to support High Performance Fortran, both a proprietary version and a compiler from the Portland Group, but it is not clear whether this is still the case. IBM uses its own PVM version from which the XDR data-format converter has been stripped; this results in lower overhead at the cost of generality. The MPI implementation, MPI-F, is likewise optimised for the p575-based systems. As the nodes are in effect shared-memory SMP systems, OpenMP can be employed for shared-memory parallelism within a node, and it can be freely mixed with MPI if needed. In addition to its own AIX operating system, IBM also supports some Linux distributions: the professional version of SuSE Linux is available for the p575 series.
Measured Performances: