System parameters:
Remarks: As stated before, it becomes more and more difficult to distinguish between clusters and what used to be called "integrated" parallel systems, as the latter increasingly employ standard components that can also be found in any cluster. For the new bullx systems, available from Bull since spring 2009, this is certainly the case. There are, however, a number of distinguishing features of the bullx systems that made us decide to discuss them in this overview.

The systems come in two variants: one is a blade system with 18 blades in a 7U chassis; the other is based on 1U units that pack 2 boards together, each containing 2 processors. The processor employed in both models is Intel's six-core Westmere EP processor discussed on the Xeon page. The density of both types of systems is equal, and it is up to the preference of the customer which type is chosen. The functionality is not entirely equal, however: in the R422 units a Bull-proprietary switch is present that connects the two boards in the unit and allows the 4 processors in the unit to work as an SMP node with 24 cores. For those who prefer the highest possible performance per core, a variant is available that has 4 instead of 6 cores per processor; in that case the clock frequency is 3.06 GHz and, of course, the bandwidth per core is 1.5 times higher (see the sketch at the end of these remarks).

The blade B500 system becomes hybrid, i.e., it integrates GPU accelerators, by putting B505 blades in the system. The B505s have a double-blade form factor and contain 2 Westmere processors and 2 nVIDIA Tesla M1060 parts, the predecessor of the NVIDIA Fermi card (see the nVIDIA page). The peak performances quoted for these cards are 991 Gflop/s in 32-bit precision and 78 Gflop/s in 64-bit precision. For the R42x E2-based systems there is a 1U enclosure containing an M1060 GPU, in which case the unit is called an R423. The R424 packs four processors in 2U, so the density is the same as for the R422, but it has more reliability features built in. The same goes for the R425, which contains 4 Westmere processors and 2 M1060 GPUs.

For both systems QDR Infiniband (see the Infiniband page) is available as an interconnection medium for the highest transfer speed. However, DDR Infiniband or Gigabit Ethernet can also be chosen in the case of the R422 E2-based product. As a 36-port QDR module is integrated in the 7U chassis holding the blades, only the QDR option is present in the blade-based model. Of course the topology between chassis or rack units is up to the customer and is therefore variable with respect to global bandwidth and point-to-point latencies.
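The 1.5x figure quoted above follows directly from a socket's fixed memory bandwidth being shared by fewer cores. The Python sketch below works out that arithmetic; note that the 32 GB/s per-socket bandwidth is our own assumption (three DDR3-1333 channels at roughly 10.66 GB/s each, typical for Westmere EP boards), not a figure from this overview.

    # Back-of-the-envelope check of the per-core bandwidth claim.
    # Assumed: 3 DDR3-1333 channels per socket at ~10.66 GB/s each;
    # the core counts are taken from the text above.
    PER_SOCKET_BW_GBS = 3 * 10.66

    def per_core_bandwidth(cores_per_socket: int) -> float:
        """Memory bandwidth available to each core, in GB/s."""
        return PER_SOCKET_BW_GBS / cores_per_socket

    bw6 = per_core_bandwidth(6)  # standard six-core variant
    bw4 = per_core_bandwidth(4)  # 3.06 GHz quad-core variant
    print(f"six-core:  {bw6:.2f} GB/s per core")
    print(f"quad-core: {bw4:.2f} GB/s per core")
    print(f"ratio:     {bw4 / bw6:.2f}x")  # -> 1.50, as stated in the text

Because the absolute per-socket bandwidth cancels out in the ratio, the 1.5x result holds regardless of the exact memory configuration.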
Measured Performances: