InfiniPath provides only Host Channel Adapters with a 4X (1.25 GB/s) InfiniBand link on the network side, connecting to a HyperTransport bus or PCI-Express on the host side. For systems with AMD processors the HyperTransport option is particularly attractive because of the direct connection to the host's processors, which results in very low latencies for small messages: PathScale, the vendor of the InfiniPath HCAs, quotes latencies as low as 1.29 µs. Obviously, this type of HCA cannot be used in systems based on non-AMD processors; for these systems the PCI-Express HCAs can be used. Their latency is slightly higher, but still low at 1.6 µs. The effective bandwidth is also high: a uni-directional bandwidth of ≅ 950 MB/s can be obtained using MPI for both types of HCA. This is close to the ≅ 1 GB/s of user data that remains of the 1.25 GB/s signalling rate after InfiniBand's 8b/10b link encoding.
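Latency and bandwidth figures of this kind are typically obtained with a simple MPI ping-pong test between two hosts. The sketch below shows the general shape of such a benchmark; the buffer size, iteration counts, and output format are illustrative assumptions, not details of the PathScale measurements.

```c
/* Minimal MPI ping-pong sketch for measuring small-message latency and
 * large-message bandwidth between two ranks.  Message sizes and iteration
 * counts are illustrative, not taken from the PathScale tests. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Time one-way transfer of 'bytes' over 'iters' round trips. */
static double pingpong(int rank, char *buf, int bytes, int iters)
{
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    /* One round trip = two messages; per-message time is half of it. */
    return (MPI_Wtime() - t0) / (2.0 * iters);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {                       /* the test needs exactly 2 ranks */
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(1 << 20);           /* 1 MB message buffer */

    double lat = pingpong(rank, buf, 0, 10000);     /* zero-byte latency  */
    double t   = pingpong(rank, buf, 1 << 20, 100); /* 1 MB transfer time */
    if (rank == 0) {
        printf("latency:   %.2f us\n", lat * 1e6);
        printf("bandwidth: %.1f MB/s\n", (1 << 20) / t / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

For the numbers to reflect the interconnect, the two ranks must be placed on different nodes (e.g. `mpirun -np 2` with one rank per host); otherwise the MPI library will usually route the messages through shared memory.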
The InfiniPath HCAs do not contain processing power themselves. Any processing
associated with the communication is done by the host processor. According to
PathScale this is an advantage because the host processor is usually much
faster than the processors employed in switches. An evaluation report from
Sandia National Laboratories [12]
seems to corroborate this assertion.