Increasing cluster throughput would allow customers to run more
detailed models or simply get more modeling runs completed in less
time. The result, said Feldman, could be very significant for CD-adapco
customers, because some simulations take weeks to complete. The
QLogic interconnects could cut this time down to days. Not only would this allow projects to be completed faster, but more simulations could also be performed simultaneously, increasing the capabilities of the STAR-CD application in clustered environments.
To verify their performance expectations and create sizing data for
customers to use, Feldman’s team configured a test cluster using
the new QLogic InfiniBand adapters connected through a SilverStorm
9080 multiprotocol fabric director. A 30-node test cluster was built
using IBM e326 servers running Linux. Each node has two dual-core 64-bit 2.4 GHz AMD Opteron processors with 2GB of memory per core. A 6TB RAID storage system is connected to all the nodes in the cluster through a separate GigE network.
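For context, that configuration provides 120 cores (30 nodes with two dual-core processors each) and, at 2GB per core, roughly 240GB of aggregate memory. On a cluster of this kind, a common first sanity check is a small MPI program that reports how ranks are placed across the nodes. The sketch below is not from the case study; it assumes a standard MPI installation and is purely illustrative.

/* rank_map.c -- report how MPI ranks are placed across cluster nodes.
 * Illustrative sketch only; not part of the CD-adapco test procedure.
 * Build: mpicc -O2 rank_map.c -o rank_map
 * Run (for example): mpirun -np 120 ./rank_map
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &namelen);

    /* Each rank (e.g., 120 ranks on 30 four-core nodes) prints its host name. */
    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}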
Looking under the hood, QLogic InfiniPath adapters rev up the test cluster
To measure performance, the team ran an automobile aerodynamics model on an increasingly larger set of compute nodes. “The speed of the analysis continued to increase, indicating excellent scalability beyond what has been seen in other clusters,” Feldman said. “As we add nodes to clusters using other interconnects, the inter-node links can become bottlenecks. But with the performance of the QLogic interconnects, our experience shows that cluster performance can scale linearly as we add nodes.”
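Linear scaling of this kind is usually quantified as strong-scaling speedup, S(N) = T(1)/T(N), and parallel efficiency, S(N)/N; ideal scaling keeps efficiency near 100% as nodes are added. The snippet below illustrates the calculation with placeholder wall-clock times; these are not measurements from the CD-adapco tests.

/* scaling.c -- compute strong-scaling speedup and efficiency from wall times.
 * The timings below are placeholders, NOT data from the CD-adapco cluster.
 * Build: cc -O2 scaling.c -o scaling
 */
#include <stdio.h>

int main(void)
{
    /* Hypothetical wall-clock times (seconds) for the same model on N nodes. */
    const int    nodes[]  = { 1, 2, 4, 8, 16, 30 };
    const double time_s[] = { 3600.0, 1815.0, 915.0, 462.0, 235.0, 128.0 };
    const int count = (int)(sizeof(nodes) / sizeof(nodes[0]));

    for (int i = 0; i < count; i++) {
        double speedup    = time_s[0] / time_s[i];       /* relative to 1 node */
        double efficiency = speedup / nodes[i] * 100.0;  /* % of ideal linear  */
        printf("%2d nodes: %7.1f s  speedup %5.2fx  efficiency %5.1f%%\n",
               nodes[i], time_s[i], speedup, efficiency);
    }
    return 0;
}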
Feldman attributes the improved cluster scalability to the higher bandwidth of the QLogic interconnects combined with their extremely low ping-pong and random ring latency. “Ping-pong latency, which measures communication between two nodes, was 1.6 microseconds, lower than other InfiniBand adapters. As for random ring communication, which is measured between many nodes in a cluster, the latency was an extremely low 1.29 microseconds.”
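Ping-pong latency is conventionally reported as half the round-trip time of a very small message exchanged between two MPI ranks, averaged over many iterations; random ring latency extends the idea to messages between randomly ordered neighbors across the whole cluster. A minimal sketch of the two-node measurement is shown below. It is illustrative only and is not the benchmark used in these tests.

/* pingpong.c -- minimal MPI ping-pong latency sketch: half the average
 * round-trip time of a 1-byte message between ranks 0 and 1.
 * Illustrative only; not the benchmark used in the case study.
 * Build: mpicc -O2 pingpong.c -o pingpong   Run with at least 2 ranks.
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        /* latency = half the average round trip, reported in microseconds */
        printf("ping-pong latency: %.2f us\n",
               (t1 - t0) / ITERS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}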
The new cluster zooms off to create new opportunities
According to Feldman, the performance of the QLogic-based cluster
approaches supercomputing levels at a very attractive price point.
“With QLogic InfiniPath adapters, our applications become accessible
to more companies because we have dramatically reduced the cost
of the compute platform. That creates new opportunities for our
company,” he said. “Whether our customers use our CFD software
to design the fastest race cars in the world or simply need the
fastest modeling capabilities available for other products, using
QLogic adapters is critical to getting precise results with low capital
equipment costs.”
As seen in this drawing, QLogic InfiniPath QLE7140 PCI adapters interconnect through a SilverStorm 9080 multiprotocol fabric director (now part of the QLogic product family) to 30 IBM® e326 compute nodes in two racks of an IBM 1350 System Cluster running Linux. Each node has two dual-core 64-bit 2.4 GHz AMD Opteron processors, each with 2GB of memory. A 6TB RAID storage system is connected to the cluster over a dedicated GigE network.
©2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, the Powered by QLogic logo, and InfiniPath are registered trademarks or trademarks of QLogic Corporation. All other brands and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.
SN0130926-00 Rev A 11/06