The IBM System Cluster 1350

Machine type: RISC-based distributed-memory multi-processor.
Models: IBM System Cluster 1350.
Operating system: Linux (RedHat EL4/5, SuSE SLES 10), Windows Server 2008.
Connection structure: Variable (see remarks).
Compilers: XL Fortran 90, (HPF), XL C, C++.
Vendor's information Web page: http://www.ibm.com/systems/clusters/hardware/1350.html
Year of introduction: 2005–2008, depending on blade/rack type.

System parameters:

Model: IBM System Cluster 1350
No. of processors: 2–1024

Remarks:

The IBM System Cluster 1350 is one of the systems referred to in the introduction of this section. The choice of components is so wide that no single description of the system can be given; the only fixed parameter is the number of processors that can be housed. The system can accommodate a bewildering number of different rack units and blades, including models with AMD Opteron, PowerPC 970MP, POWER6, or Cell BE processors. The choice of interconnect is also more or less arbitrary: Gigabit Ethernet, InfiniBand, Myrinet, etc. A very large system built on this technology is the Mare Nostrum machine at the Barcelona Supercomputing Centre, a cluster of ten 1350 Cluster Systems based on the JS21 blade, which carries the PowerPC 970MP variant (see section PowerPC 970). With 10,240 processors, the Theoretical Peak Performance is just over 94 Tflop/s.
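As a rough check of that peak figure, the sketch below assumes the 2.3 GHz clock of the PowerPC 970MP in the JS21 blades and 4 floating-point operations per cycle per core (two floating-point units, each capable of a fused multiply-add); neither number is stated in this entry, so both should be treated as assumptions.

  # Sketch of the peak-performance arithmetic for the Mare Nostrum configuration.
  # Assumed (not stated in this entry): 2.3 GHz clock, 4 flop/cycle per core.
  cores = 10_240
  clock_hz = 2.3e9
  flops_per_cycle = 4

  peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
  print(f"Theoretical peak: {peak_tflops:.1f} Tflop/s")  # ~94.2 Tflop/s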

Cell BE blades (QS21 and QS22) can also be accommodated in the System Cluster 1350, so one can potentially build a hybrid system in which the Cell processors act as computational accelerators.