High Performance Computing – The SuperComputing Conference, held in New Orleans, is an interesting event if you have high performance computing (HPC) needs, such as those found in broadcast, post production and some ProAV applications.
This is where the latest CPU, GPU and high-speed connectivity hardware is showcased, so a quick summary of some top items is in order.
Dell unveiled the purpose-built PowerEdge C4130, the only Intel Xeon E5-2600v3 1U server to support four GPU accelerators. It achieves over 7.2 teraflops in a single 1U chassis and delivers a performance/watt ratio of up to 4.17 gigaflops per watt. These metrics help datacenters save on power consumption and floor space.
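Those two quoted figures can be cross-checked with simple arithmetic. The sketch below uses only the numbers from the announcement; the implied system power and per-accelerator share are derived estimates, not Dell specifications:

```python
# Back-of-the-envelope check on the quoted C4130 figures. Peak flops and
# flops-per-watt are from the announcement; the implied power draw and
# per-accelerator share are derived, not quoted by Dell.
peak_tflops = 7.2          # quoted peak for one 1U server
gflops_per_watt = 4.17     # quoted performance/watt ratio
accelerators = 4           # GPU/coprocessor slots per chassis

implied_watts = peak_tflops * 1000 / gflops_per_watt
per_accelerator = peak_tflops / accelerators

print(f"Implied system power: ~{implied_watts:.0f} W")
print(f"Peak per accelerator: ~{per_accelerator:.1f} TFLOPS")
```

At roughly 1,700 W implied for the full box, the density claim is consistent with four ~300 W accelerators plus two host CPUs in 1U.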
The PowerEdge C4130 packs up to two Intel Xeon E5-2600v3 processors, 16 DIMMs of DDR4 memory and up to two 1.8″ SATA SSD boot drives into just 1U of rack space. With up to four 300W Intel Xeon Phi coprocessors or NVIDIA Tesla GPU accelerators, including the new NVIDIA Tesla K80 dual-GPU accelerator, the C4130 is aimed at even the most complex and compute-intensive workloads.
Dell also announced Lustre software to bring the benefits of an HPC file system – including advanced modeling, simulation, and data analysis – to a wider audience of enterprise users. The solution is built on Dell’s PowerEdge servers and Dell Storage MD3460 arrays and MD3060e dense enclosures to provide redundant servers and failover-enabled storage. It is one of the first solutions with validation and support for Intel EE for Lustre software, providing customers with simplified installation, configuration, monitoring and overall management features that are backed by both Dell and Intel.
Intel used the event to discuss the next generation of the Intel Xeon Phi coprocessor, code-named Knights Hill, and new architectural and performance details for Intel Omni-Path Architecture, a new high-speed interconnect technology optimized for HPC deployments.
Intel disclosed that its future, third-generation Intel Xeon Phi product family, code-named Knights Hill, will be built using Intel’s 10nm process technology and will integrate Intel Omni-Path Fabric technology. Knights Hill will follow the upcoming Knights Landing product, with first commercial systems based on Knights Landing expected to begin shipping next year.
Industry investment in Intel Xeon Phi products continues to grow, with more than 50 providers expected to offer systems built using the new processor version of Knights Landing, and many more systems using the coprocessor PCIe card version of the product.
Intel disclosed that the Intel Omni-Path Architecture is expected to offer 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives. The Intel Omni-Path Architecture will use a 48-port switch chip to deliver greater port density and system scaling compared to current 36-port InfiniBand alternatives. Providing up to 33 percent more nodes per switch chip is expected to reduce the number of switches required, simplifying system design and reducing infrastructure costs at every scale.
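The 33 percent figure is simply the port-count ratio, but switch radix compounds at cluster scale. A rough sketch, using the standard non-blocking two-tier fat-tree rule of thumb (a generic sizing rule, not part of Intel's announcement):

```python
# Why switch radix matters: in a non-blocking two-tier fat tree built from
# radix-r switches, the maximum host count is r*r/2. The 33% figure in the
# article is just 48/36; the fat-tree formula is a generic rule of thumb.
def max_nodes_two_tier(radix: int) -> int:
    """Max hosts in a non-blocking two-tier fat tree of radix-`radix` switches."""
    return radix * radix // 2

for ports in (36, 48):
    print(ports, "ports ->", max_nodes_two_tier(ports), "hosts max")

print(f"Port advantage per chip: {48/36 - 1:.0%}")
```

A 33 percent port advantage per chip thus translates into a much larger gain in maximum fabric size, which is where the switch-count and cost savings come from.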
Echostreams Innovative Solutions
Fast connectivity with memory and other servers in HPC is essential too. Here, Echostreams Innovative Solutions announced the availability of the long-awaited Griffin2-24 high-performance high-availability flash server for early customer sampling.
It is based on the latest Intel Grantley platform supporting dual E5-2600v3 Haswell processors, and includes Mellanox Technologies' ConnectX-3 FDR 56Gbps InfiniBand adapter cards and 12Gbps SAS (Serial Attached SCSI) connectivity, making it an all-flash redundant server and one of the most powerful Cluster-in-a-Box solutions on the market today.
This solution is what Echostreams calls a redundant tier-zero IB storage platform. For tier-1 storage, the eDrawer4060J 12G dual-expander JBOD with 60 HGST 3.5-inch 7,200RPM high-capacity SAS HDDs can be attached to the Griffin2-24, offering more than half a petabyte of tiered storage.
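The half-petabyte figure is easy to verify with drives in the high-single-digit-terabyte class. A quick sketch (the per-drive capacity below is an assumption; the article says only "high-capacity"):

```python
# Raw capacity check for the eDrawer4060J configuration. The per-drive
# capacity is an assumption -- the article only says "high-capacity" HGST
# 3.5-inch SAS HDDs; adjust tb_per_drive for the actual drive model.
drives = 60
tb_per_drive = 10          # assumed capacity per drive, in decimal TB

total_tb = drives * tb_per_drive
print(f"Raw capacity: {total_tb} TB (~{total_tb/1000:.2f} PB)")
```

At 10 TB per drive this comes to 600 TB raw, comfortably over the "more than half a petabyte" claim before any RAID or filesystem overhead.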
In the HPC field, there is also a need to do remote I/O, GPGPU computing, and Disaggregated Computing Architectures. Here, PCI Express (PCIe) Gen 3 switching over fiber optics can be the answer.
At SC14, Samtec showed off this solution using a PCIe Gen 3 card with its unique PCIEO Series Active Optical cable assembly, which achieves x8 PCIe Gen 3 data rates (8 GT/s per lane) over 100 meters.
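The usable throughput behind that claim follows from standard PCIe arithmetic (generic PCIe math, not a Samtec specification): Gen 3 signals at 8 GT/s per lane with 128b/130b line encoding, so an x8 link carries just under 8 GB/s in each direction.

```python
# Effective bandwidth of an x8 PCIe Gen 3 link. This is standard PCIe
# arithmetic, not a vendor spec: 8 GT/s per lane, 128b/130b encoding.
gt_per_s = 8.0             # Gen 3 raw signaling rate per lane
encoding = 128 / 130       # 128b/130b line-code efficiency
lanes = 8                  # x8 link width

gbps_per_lane = gt_per_s * encoding            # usable Gb/s per lane
gbytes_per_s = gbps_per_lane * lanes / 8       # link throughput in GB/s
print(f"~{gbytes_per_s:.2f} GB/s per direction")
```

That works out to roughly 7.88 GB/s per direction, which is what the optical link must sustain to be a transparent replacement for a copper x8 connection.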
The PCIEO Series is compatible with the iPass form factor and is a direct replacement for passive or active copper cables; it also supports idle and sideband signaling. A miniature optical transceiver inside the plug at each end performs the electrical-to-optical conversion over a thin optical fiber, allowing a significant boost in cable length. – Chris Chinnock