“The analysis that consisted of 20,000 single-processor jobs—the one that took the previous cluster 50 days to complete—now gets done in just five and a half days with the HP BladeSystem.”
John Wofford, director of IT, Center for Computational Biology and Bioinformatics, Columbia University

HP customer case study: High-performance computing
Power and cooling
HP ProLiant BL2x220c server blades
HP ProCurve switches

Industry: education and research

Objective
Deploy a single, large, general-purpose cluster while minimizing space, power, and cooling requirements

Approach
Standardize on HP ProLiant BladeSystem server blades, HP ProCurve switches, and HP Integrated Lights-Out (iLO2) for remote management

IT improvements
• Hundreds of IT administrator hours saved on basic server management tasks
• 10x performance improvement over the next-fastest system
• 15 to 20% performance improvement in jobs
• 100x faster server updates
• 50% reduction in administrator visits to physical clusters
• 40% reduction in power consumption

Business outcomes
• $95,000 USD annual savings in power and cooling costs
• $110,000 USD saved in rack and chiller costs compared with other vendors’ solutions
• Expense of two full-time hires avoided using HP iLO2
• Enhanced ability to attract and retain top researchers and projects

The power to solve medical mysteries

As the H1N1 virus—commonly known as swine flu—spread, researchers at Columbia University wanted to map its precise genetic makeup. How, they wondered, might it have evolved from flu viruses in pandemics past? To find out, they needed to run extensive computational analyses—and they weren’t the only ones seeking high-performance computing time. At Columbia University’s Center for Computational Biology and Bioinformatics (C2B2), large computer clusters are busy unraveling mysteries such as the makeup of the H1N1 virus, how complex traits are inherited, how proteins and DNA interact, and how harmful cell mutations might be predicted.
“The HP BladeSystem is saving us roughly $95,000 in energy costs in a year. The power numbers really sealed the deal.”
John Wofford, director of IT, Center for Computational Biology and Bioinformatics, Columbia University

The intent is to gain understanding and uncover insights that might lead toward better treatments and cures. But until recently, there wasn’t enough processor capacity to satisfy demand. Systems biology, for example, is a new field that takes into account the complex interactions within whole biological systems. A batch of cellular network calculations in this field can contain upwards of 20,000 single-processor jobs. Until recently, that calculation would have taken 50 days to complete on the largest computing cluster C2B2 had. To deliver results faster, and to continue attracting and keeping top researchers, the center needed more capacity.

The center already had a dozen specialized compute clusters. But according to John Wofford, director of IT at C2B2, the clusters no longer offered enough scalability and were a challenge to manage. “We hit a point with our old clusters where we had many more jobs to run than we could feasibly fit into the time we had to run them,” Wofford explains.

Going big while reducing power consumption 40%

Read the full case study, “Exponential Improvement in Center’s Computational Research with HP Servers”: the 464-node “Titan” cluster at Columbia University uses HP BladeSystem to boost performance by 10x and reduce annual power costs by $95,000.