IBM Systems and Technology
Case Study
MASSIVE sheds new light on complex research
Delivering faster data analysis and visualization from hybrid architecture
Overview

The need
Monash University (MU) and Australian Synchrotron (AS) and their research partners needed a powerful, massively parallelized HPC system to process imaging data and to increase the efficiency of data-collection tasks by enabling near real-time processing of collected data.

The solution
Implementation of two IBM System x® iDataPlex® dx360 clusters with both intelligent Intel® Xeon® processors and NVIDIA GPUs at AS and MU.

The benefit
Researchers now get near real-time previews and analysis of CT, MRI and electron microscope scans, enabling them to ensure that they are capturing all the data they need within limited windows of time.

The city of Melbourne in the Australian state of Victoria has developed a reputation for innovations in the field of High Performance Computing (HPC). Monash University (MU), CSIRO and the Australian Synchrotron (AS) are research institutions at the forefront of this scientific endeavor, and are home to a number of powerful HPC platforms. MU, CSIRO and AS work with the Victorian Partnership for Advanced Computing (VPAC) to run the Multi-modal Australian Sciences Imaging and Visualization Environment, known as MASSIVE.

The Victorian Government supported the establishment of MASSIVE, and the National Computational Infrastructure (NCI) supports MASSIVE as a specialized facility for researchers across Australia.

Growing data demands
Australian scientists have access to a range of high-resolution imaging instruments, including the Imaging and Medical Beamline at the Australian Synchrotron. The MASSIVE partnership was formed to help scientists extract the most from the data these instruments produce by providing a powerful, massively parallel HPC system optimized to process imaging data.
“The Australian scientific community is fortunate to have access to a
range of amazing instruments, including the Australian Synchrotron,”
explains Wojtek James Goscinski, Coordinator of the MASSIVE project.
Over the past few years, there’s been a huge increase in the availability of imaging equipment, such as new MRI and CT facilities and new-generation instruments such as the Imaging and Medical Beamline at AS.
Researchers use these facilities to perform high-resolution 3D scans of research samples, which can be anything from live organs to rock fragments. The three-dimensional image that is produced—called a “data volume”—is enormous.
“In the past, getting meaningful results from a complex series of scans could take weeks or even months to achieve,” says Goscinski. “Cutting down the time it takes to process such crucial data can have a real impact on delivering new insights ahead of other research groups.”

The MASSIVE partners wanted to help researchers get the most out of increases in the prevalence and performance of imaging modalities. To achieve this, the partners had to develop an HPC platform tailored to the specific demands of processing high-resolution data volumes.

“IBM offered a high floating-point computational performance to power ratio for our IT spend—which impressed us during tender.”
—Wojtek James Goscinski

High performance, high visibility
MASSIVE produced a detailed schematic of the HPC solution it
required, and put it out to tender. “We needed a massively parallelized
cluster, built on a hybrid GPU/CPU infrastructure,” says Goscinski. “As
publicly funded institutions, we had a duty to ensure that the IT solution
met strict green IT requirements.
IBM offered a high floating-point computational performance to power
ratio for our IT spend—which impressed us during tender.”
The full solution comprises two linked clusters—MASSIVE1 and
MASSIVE2—managed as a single system.
Each MASSIVE cluster has 42 IBM System x iDataPlex dx360 servers.
Each of these iDataPlex nodes has two six-core Intel Xeon 5600 Series
processors running at 2.66 GHz (for a total of 504 cores per cluster)
and two NVIDIA M2070 GPUs (for a total of 84 GPUs per cluster).
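The per-cluster totals quoted above follow directly from the node counts. A quick sanity check in plain Python, using only figures from the text:

```python
# Per-cluster totals for MASSIVE1/MASSIVE2, using the figures quoted above.
nodes_per_cluster = 42      # IBM iDataPlex dx360 servers per cluster
cores_per_node = 2 * 6      # two six-core Intel Xeon 5600 Series processors
gpus_per_node = 2           # two NVIDIA M2070 GPUs

cores_per_cluster = nodes_per_cluster * cores_per_node  # 42 * 12 = 504
gpus_per_cluster = nodes_per_cluster * gpus_per_node    # 42 * 2 = 84

print(cores_per_cluster, gpus_per_cluster)  # 504 84
```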
The Intel Xeon processors provide industry-leading performance com-
bined with extreme energy-efficiency. iDataPlex is also a highly efficient
solution, offering a unique half-depth form factor that maximizes the
effect of cooling, enabling more processors to be packed reliably into a
smaller space.
The iDataPlex nodes can run Microsoft Windows Server or Linux
depending on the individual requirements of a simulation, with volume
reconstruction running in a Windows HPC environment and the core
services on Linux. IBM General Parallel File System (GPFS™) provides
high-performance parallelized access to data for both operating systems.
“The CT scan reconstruction algorithms that are used by imaging
scientists to create 3D volumes are well parallelized on a GPU,” explains
Goscinski. “The reconstruction algorithms actually run so quickly that
the challenge is getting data into the GPUs fast enough. We configured
MASSIVE with an optimal ratio of GPUs to file-system performance,
providing the best possible price-performance ratio.”
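Goscinski’s point—that the reconstruction kernels outrun the I/O feeding them—is the classic motivation for overlapping data loading with computation. The sketch below (plain Python with a background prefetch thread; the function names and the toy “reconstruct” step are illustrative stand-ins, not MASSIVE’s actual pipeline) shows the pattern: while the current slice is being processed, the next one is already being read.

```python
from concurrent.futures import ThreadPoolExecutor

def load_slice(i):
    """Stand-in for reading one slice of a data volume from the file system."""
    return [i] * 4  # in reality: a large array read via the parallel file system

def reconstruct(data):
    """Stand-in for the GPU reconstruction kernel."""
    return sum(data)

def pipeline(n_slices):
    """Double-buffered loop: prefetch slice i+1 while processing slice i."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_slice, 0)              # kick off the first read
        for i in range(n_slices):
            data = pending.result()                     # wait for current slice
            if i + 1 < n_slices:
                pending = io.submit(load_slice, i + 1)  # prefetch the next slice
            results.append(reconstruct(data))           # compute overlaps the read
    return results

print(pipeline(3))  # [0, 4, 8]
```

If the file system cannot sustain the rate at which the compute side drains slices, the `pending.result()` call becomes the bottleneck—which is exactly why MASSIVE was configured around the ratio of GPUs to file-system performance.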
Solution components

Hardware
● IBM System x® iDataPlex® dx360 class server
● IBM System x3650 class server
● Intel® Xeon® processors
● IBM System Storage® DS3500 Turbo
● IBM System Storage SAN24B-4 Express
● IBM System Networking RackSwitch G8124, G8100
● Mellanox IS5200 QDR Switch

Software
● IBM General Parallel File System (GPFS™)
● Extreme Cloud Administration Toolkit (xCAT)
● Linux

Services
● IBM ANZ STG Lab Services

He adds: “We also use the GPUs in more conventional ways to perform real-time or offline rendering for visualization projects. At the lower end of the workload spectrum, we also offer an interactive desktop environment running on individual nodes, with a whole range of tools that researchers can use to process their data.”

One of the most important functions of the MASSIVE system is its ability to provide a real-time preview of scan data. “One of the major inefficiencies in high-resolution imaging experiments was that it was difficult to know if you’d captured your data correctly,” says Goscinski. “With the visualization capabilities of our IBM iDataPlex solution, we’ve given researchers the chance to check that they’re collecting all the data they want, allowing them to get maximum value from their allotted scanning slots.”

Throughout the implementation, MASSIVE worked closely with the IBM team. “We found our local IBM representative to be highly motivated and professional,” says Goscinski. “The whole process went smoothly, and we’re very satisfied with the results.”
Clear benefit
MASSIVE’s resources are shared between Australian Synchrotron,
CSIRO, Monash University and the VPAC. In addition, a portion of
the total resources was also purchased by the National Computational
Infrastructure—one of the three peak Australian supercomputing
facilities—for leading researchers across Australia. Computing time is
allocated to projects based on their scientific merit.
While some researchers consume hundreds of thousands of computing hours, others require only a few hours through a desktop interface, making MASSIVE a highly flexible HPC solution.
“We support a lot of neuroimaging research, as well as engineering and
materials science. Scientists who use microscopy and electron microscopy
also use the system,” continues Goscinski. “We’re also getting increasing
demand from researchers across Australia in the fields of molecular
dynamics and astrophysics, who use MASSIVE’s parallelized GPUs to
run complex simulations.”
With MASSIVE’s support from the Victorian Government and the
National Computational Infrastructure, Australian researchers benefit
from the exchange of knowledge and ideas on the use and application of
massively parallel HPC.
“MASSIVE is a key specialized HPC resource in Australia,” says
Goscinski. “Data generated from imaging modalities across the country
can now be processed far more effectively, generating more significant
insights, faster.”