System parameters:
Remarks: As already stated before, it becomes more and more difficult to distinguish between clusters and what used to be called "integrated" parallel systems, as the latter type increasingly employs standard components that can also be found in any cluster. For the new bullx systems, available from Bull since spring 2009, this is certainly the case. There are, however, a number of distinguishing features of the bullx systems that made us decide to discuss them in this overview. The systems come in two variants: blade systems with 18 blades in a 7U chassis, and rack systems built from 1U units. Two blade series are offered that are similar except for the cooling: the B510 series is air-cooled, while the equivalent B710 series has direct liquid cooling, i.e., water is run through a copper heat-sink plate fitted onto the board that holds the compute components, such as the CPUs (and/or accelerators, see below) and the memory. The rack variant is based on 1U units that pack two boards together, each containing two processors. The processor employed in both models is Intel's 12-core Ivy Bridge processor discussed on the Xeon page. The density of both types of systems is equal, so the choice between them is up to the preference of the customer.

The B510 blade system can be made hybrid, i.e., GPU accelerators are integrated by placing B515 blades in the system. The B515s have a double-blade form factor and contain two Ivy Bridge processors and two nVIDIA Kepler K20X cards (see the nVIDIA page). Likewise, Intel's Xeon Phi accelerators are offered in the same form factor, both accelerator types having a peak speed of over 1 Tflop/s. For the R42x E2-based rack systems there is a 1U enclosure containing a K20X GPU, in which case the unit is called an R423. The R424 packs four processors in 2U, so the density is the same as for the R422, but it has more reliability features built in. The same goes for the R425, which contains four Ivy Bridge processors and two K20X GPUs. The F2 model is identical to the E2 model, except that it allows for extended storage with SAS disks and RAID disks. In all cases Intel Xeon Phis are also available as accelerators. For both the blade and the rack systems, SSD storage is supported instead of spinning disks.

For the rack systems QDR Infiniband (see the Infiniband page) is available as the interconnect medium. In the blade-based models a 36-port QDR or FDR module is integrated in the 7U chassis holding the blades. Of course, the topology between chassis or rack units is up to the customer and is therefore variable with respect to global bandwidth and point-to-point latencies.
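To put the quoted accelerator speeds in perspective, the following small sketch estimates the theoretical peak performance of a hybrid B515 blade from its components. The clock frequencies and flops-per-cycle figures used here are illustrative assumptions (a 2.7 GHz 12-core Ivy Bridge Xeon without FMA and a K20X with 14 SMX units running at 732 MHz), not values quoted in this overview.

    def peak_gflops(units, flops_per_cycle, clock_ghz):
        # Theoretical peak in Gflop/s: units x flops/cycle x clock (GHz).
        return units * flops_per_cycle * clock_ghz

    # Assumed 12-core Ivy Bridge at 2.7 GHz; 8 DP flops/cycle/core with AVX
    # (4-wide add + 4-wide multiply, Ivy Bridge has no FMA).
    cpu = peak_gflops(12, 8, 2.7)            # ~259 Gflop/s per socket

    # Assumed K20X: 14 SMX x 64 DP units, FMA counted as 2 flops, 0.732 GHz.
    gpu = peak_gflops(14 * 64, 2, 0.732)     # ~1311 Gflop/s per card

    # A B515 double blade holds 2 Ivy Bridge CPUs and 2 K20X cards.
    blade = 2 * cpu + 2 * gpu
    print(f"CPU: {cpu:.0f} Gflop/s, K20X: {gpu:.0f} Gflop/s, "
          f"B515 blade: {blade / 1000:.2f} Tflop/s")

With these assumptions a single accelerator card indeed exceeds 1 Tflop/s, and a complete B515 blade ends up at roughly 3 Tflop/s peak.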
Measured Performances:
The Bull bullx S6010/S6030 systems
System parameters:
Remarks:
The packaging of the S6010 is rather odd: the node is L-shaped and by flipping
it over one can fit it on top of another S6010 node, so that the two together
occupy 3U of rack space. The S6030 has a height of 3U and contains the same
components as two S6010s but, in addition, more PCIe slots: 2 PCIe Gen2 × 16
and 4 PCIe × 8, against 1 PCIe × 16 per S6010. Furthermore, it can house much
more disk storage: 6 SATA disks against 1 in the S6010, and up to 8 SAS disks
or SATA SSD units. Clearly, the S6010 is more targeted at computational tasks,
while the S6030 is also well-equipped for server tasks.
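The difference in I/O capability between the two nodes can be quantified with a rough bandwidth estimate. The sketch below only assumes that a PCIe Gen2 lane delivers about 0.5 GB/s per direction (5 GT/s with 8b/10b encoding) and uses the slot configurations listed above; it does not reflect measured values.

    # Aggregate one-directional PCIe bandwidth per node, assuming PCIe Gen2
    # at ~0.5 GB/s per lane (5 GT/s with 8b/10b encoding) for all slots.
    GB_PER_LANE = 0.5

    def pcie_bw(slot_widths):
        # Sum the lane counts of all slots and convert to GB/s.
        return sum(lanes * GB_PER_LANE for lanes in slot_widths)

    s6010 = pcie_bw([16])                    # 1 x PCIe x16
    s6030 = pcie_bw([16, 16, 8, 8, 8, 8])    # 2 x PCIe Gen2 x16 + 4 x PCIe x8
    print(f"S6010: {s6010:.0f} GB/s, S6030: {s6030:.0f} GB/s per direction")

Under these assumptions the S6030 offers about four times the aggregate PCIe bandwidth of an S6010, which matches its positioning as the more server-oriented node.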
Measured Performances: