HPC InfiniBand 40/56Gb vs 10GbE
http://www.mellanox.com/related-docs/case_studies/CS_Atlantic.Net.pdf
Processors now have many hyper-threaded cores and plenty of memory and cache. Standard high-performance disk technology has lazy-write caches and battery backup for reliability, and disks are striped/parallelized to keep them from being performance bottlenecks. This means network I/O, for Internet traffic, replication, caching, and disk access, will typically be the most substantial bottleneck. 40/56Gb InfiniBand at roughly $5.60/Gb/s is currently cheaper than 10GbE at roughly $11.50/Gb/s, making it the better value, and you can also run TCP/IP over InfiniBand (IPoIB). There is a 40GbE switch at roughly $208/Gb, and 100GbE also exists, but there is not much else available at 40Gb and I could find nothing at 100Gb. You can't get the 40/56Gb/s bandwidth or the lower latency of InfiniBand RDMA on 10GbE; RDMA supposedly exists for 10GbE, but after looking on Intel's site I found only one card that supports it.
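To make the price-per-bandwidth comparison concrete, here is a quick back-of-the-envelope calculation using the approximate $/Gb/s figures quoted above. The figures (and the $10,000 budget) are only illustrative assumptions; real street prices vary by vendor, cable type, and volume.

    # Rough cost-per-bandwidth comparison using the approximate figures
    # quoted above (illustrative only; actual prices vary).
    options = {
        # name: (link speed in Gb/s, approximate USD per Gb/s)
        "56Gb InfiniBand (FDR)": (56, 5.6),
        "10GbE":                 (10, 11.5),
        "40GbE":                 (40, 208.0),
    }

    budget = 10_000  # hypothetical spend in USD

    for name, (gbps, usd_per_gbps) in options.items():
        per_port = gbps * usd_per_gbps          # implied cost of one port
        bw_for_budget = budget / usd_per_gbps   # aggregate Gb/s the budget buys
        print(f"{name:22s} ~${per_port:8,.0f}/port, "
              f"~{bw_for_budget:6,.0f} Gb/s per ${budget:,}")

The point of the $/Gb/s metric is that it normalizes link speed out of the comparison: a fixed budget simply buys several times more aggregate bandwidth on InfiniBand than on 10GbE at these prices.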
Here is a paper comparing the performance of a custom key/value store, memcached, and Redis over 10GbE and InfiniBand. In the conclusion they indicate they used InfiniBand RDMA to bypass the CPU and kernel and achieved an order-of-magnitude performance gain over 10GbE. Note that they used only 20Gb InfiniBand, not the faster 40/56Gb InfiniBand that is now standard and lower cost than 10GbE.
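As a rough illustration of how the interconnect shows up at the application layer, here is a minimal latency probe against the same memcached instance reached over its 10GbE address and its IPoIB address. The addresses and the pymemcache client are assumptions for the sketch, not from the paper, and since this goes through the kernel TCP stack on both paths it cannot show the full RDMA kernel-bypass gain the paper reports; it only isolates the interconnect itself.

    # Average GET round-trip time to one memcached server over two interfaces.
    # The addresses below are placeholders; substitute your own.
    import time
    from pymemcache.client.base import Client  # pip install pymemcache

    ENDPOINTS = {
        "10GbE": ("192.168.10.5", 11211),   # hypothetical 10GbE address
        "IPoIB": ("192.168.20.5", 11211),   # hypothetical IPoIB address
    }
    N = 10_000

    for label, addr in ENDPOINTS.items():
        client = Client(addr, no_delay=True)
        client.set("bench_key", b"x" * 64)
        start = time.perf_counter()
        for _ in range(N):
            client.get("bench_key")
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed / N * 1e6:.1f} us average GET round trip")
        client.close()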
This nyu.edu paper also discusses some very interesting and important implementation details.
In mid-2013 the MetroX TX6100 InfiniBand switch was released, which supports long-haul 56Gb/s InfiniBand with RDMA up to 100km (~62 miles) for disaster recovery (DR). The 12-port version is available and costs about $50/Gb.
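As a sanity check on that figure, the arithmetic below shows what ~$50/Gb implies for a 12-port 56Gb/s box; this is just arithmetic on the number quoted above, not a vendor price list.

    # What ~$50/Gb works out to for a 12-port 56Gb/s long-haul switch
    # (simple arithmetic on the figure quoted above).
    ports = 12
    gbps_per_port = 56
    usd_per_gbps = 50

    aggregate_gbps = ports * gbps_per_port             # 672 Gb/s of long-haul capacity
    implied_cost = aggregate_gbps * usd_per_gbps
    print(f"~{aggregate_gbps} Gb/s aggregate, implied total of ~${implied_cost:,}")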
Performance testing comparison: Amazon et al.
Microsoft Azure HPC has an InfiniBand fabric connecting compute nodes.
InfiniBand growth
Ethernet RoCE (RDMA over Converged Ethernet) offers many of the benefits of InfiniBand, including RDMA.
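On Linux you can tell whether an RDMA-capable port is native InfiniBand or RoCE by its reported link layer. The short sketch below reads the standard sysfs tree (no extra libraries), assuming the RDMA drivers are loaded.

    # List RDMA-capable devices and report whether each port's link layer is
    # native InfiniBand or Ethernet (i.e. RoCE).
    import glob
    import os

    for port_dir in sorted(glob.glob("/sys/class/infiniband/*/ports/*")):
        parts = port_dir.split(os.sep)
        device, port = parts[-3], parts[-1]
        try:
            with open(os.path.join(port_dir, "link_layer")) as f:
                link_layer = f.read().strip()   # "InfiniBand" or "Ethernet"
        except OSError:
            continue
        kind = "native InfiniBand" if link_layer == "InfiniBand" else "RoCE (Ethernet)"
        print(f"{device} port {port}: {kind}")

Ports that report "Ethernet" here are the ones where RoCE applies.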