Designing HPC, Deep Learning, and Cloud Middleware
for Exascale Systems
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
Keynote Talk at HPCAC Stanford Conference (Feb ‘18)
High-End Computing (HEC): Towards Exascale
• Expected to have an ExaFlop system in 2019-2020!
• Trajectory: 100 PFlops in 2016; 1 EFlops in 2019-2020?
Increasing Usage of HPC, Big Data and Deep Learning
• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)
• Convergence of HPC, Big Data, and Deep Learning!
• Increasing need to run these applications on the Cloud!!
Focus of Today’s Talk: HPC, Deep Learning, and Cloud
(Focus of Tomorrow’s Talk: HPC + Big Data, Deep Learning, and Cloud)
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe and CNTK
• Cloud for HPC
– Virtualization with SR-IOV and Containers
Focus of Tomorrow’s Talk: HPC, Big Data, Deep Learning, and Cloud
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached for Web 2.0
– Kafka for Stream Processing
– Swift
• Deep Learning
– TensorFlow with gRPC
• Cloud for Big Data Stacks
– Virtualization with SR-IOV and Containers
Parallel Programming Models Overview
[Diagram: three abstract machine models – Shared Memory Model (SHMEM, DSM), Distributed Memory Model (MPI – Message Passing Interface), and Partitioned Global Address Space (PGAS) with logical shared memory (Global Arrays, UPC, Chapel, X10, CAF, …)]
• Programming models provide abstract machine models
• Models can be mapped on different types of systems
– e.g., Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
Partitioned Global Address Space (PGAS) Models
• Key features
- Simple shared memory abstractions
- Lightweight one-sided communication
- Easier to express irregular communication
• Different approaches to PGAS
- Languages
• Unified Parallel C (UPC)
• Co-Array Fortran (CAF)
• X10
• Chapel
- Libraries
• OpenSHMEM
• UPC++
• Global Arrays
Hybrid (MPI+PGAS) Programming
• Application sub-kernels can be re-written in MPI/PGAS
based on communication characteristics
• Benefits:
– Best of Distributed Computing Model
– Best of Shared Memory Computing Model
[Diagram: an HPC application composed of Kernels 1…N; some kernels remain in MPI while others (e.g., Kernel 2, Kernel N) are re-written in PGAS according to their communication characteristics. A minimal hybrid sketch follows below.]
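To make the hybrid model concrete, here is a minimal, hedged sketch of an MPI + OpenSHMEM program of the kind a unified runtime such as MVAPICH2-X is designed to support; the data sizes and the use of shmem_long_put for the "irregular" kernel are illustrative assumptions, not taken from the slides.

```c
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);      /* MPI kernel(s) use the MPI runtime            */
    shmem_init();                /* PGAS kernel(s) use OpenSHMEM one-sided calls */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int npes = shmem_n_pes();

    /* Kernel 1 (MPI): bulk-synchronous reduction */
    long local = rank, global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    /* Kernel 2 (PGAS): irregular one-sided update into a neighbor's symmetric heap */
    long *remote = (long *) shmem_malloc(sizeof(long));
    *remote = 0;
    shmem_barrier_all();
    shmem_long_put(remote, &global, 1, (rank + 1) % npes);  /* put to next PE */
    shmem_barrier_all();

    if (rank == 0) printf("sum=%ld received=%ld\n", global, *remote);

    shmem_free(remote);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```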
Supporting Programming Models for Multi-Petaflop and
Exaflop Systems: Challenges
[Co-design stack, top to bottom:]
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, Accelerators (GPU and FPGA)
• Middleware co-design opportunities and challenges across the various layers, targeting performance, scalability, and resilience
Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
• Scalable Collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated Support for GPGPUs and Accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-Awareness
Additional Challenges for Designing Exascale Software Libraries
• Extreme Low Memory Footprint
– Memory per core continues to decrease
• D-L-A Framework
– Discover
• Overall network topology (fat-tree, 3D, …), network topology for processes of a given job
• Node architecture, health of network and node
– Learn
• Impact on performance and scalability
• Potential for failure
– Adapt
• Internal protocols and algorithms
• Process mapping
• Fault-tolerance solutions
– Low overhead techniques while delivering performance, scalability and fault-tolerance
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,875 organizations in 85 countries
– More than 445,000 (> 0.4 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘17 ranking)
• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 12th, 368,928-core (Stampede2) at TACC
• 17th, 241,108-core (Pleiades) at NASA
• 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->
– Sunway TaihuLight (1st in Jun’17, 10M cores, 100 PFlops)
MVAPICH2 Release Timeline and Downloads
[Figure: cumulative downloads from the OSU site, Sep-04 through Jan-18, growing from 0 to roughly 450,000; release markers along the timeline include MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2 2.1, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3a, MV2-X 2.3b, MV2-Virt 2.2, and MV2 2.3rc1]
Architecture of MVAPICH2 Software Family
• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
– Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail
– Transport mechanisms: shared memory, CMA, IVSHMEM; upcoming features: MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)
MVAPICH2 2.3rc1
• Released on 02/19/2018
• Major Features and Enhancements
– Enhanced performance for Allreduce, Reduce_scatter_block, Allgather, and Allgatherv through new algorithms
– Enhanced support for MPI_T PVARs and CVARs
– Improved job startup time for OFA-IB-CH3, PSM-CH3, and PSM2-CH3
– Support to automatically detect the IP address of IB/RoCE interfaces when RDMA_CM is enabled, without relying on the mv2.conf file
– Enhanced HCA detection to handle cases where a node has both IB and RoCE HCAs
– Automatically detect and use the maximum MTU supported by the HCA
– Added logic to detect heterogeneous CPU/HFI configurations in the PSM-CH3 and PSM2-CH3 channels
– Enhanced intra-node and inter-node tuning for the PSM-CH3 and PSM2-CH3 channels
– Enhanced HFI selection logic for systems with multiple Omni-Path HFIs
– Enhanced tuning and architecture detection for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems
– Added 'SPREAD', 'BUNCH', and 'SCATTER' binding options for the hybrid CPU binding policy
– Renamed MV2_THREADS_BINDING_POLICY to MV2_HYBRID_BINDING_POLICY
– Added support for MV2_SHOW_CPU_BINDING to display the number of OMP threads
– Updated to hwloc version 1.11.9
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
– Support for highly-efficient communication
– Scalable Start-up
– Optimized Collectives using SHArP and Multi-Leaders
– Optimized CMA-based Collectives
– Optimized XPMEM-based Collectives
– Dynamic and Adaptive Communication and Tag Matching
– Performance Engineering with MPI-T Support
– Integrated Network Analysis and Monitoring
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
One-way Latency: MPI over IB with MVAPICH2
[Figure: small- and large-message one-way latency versus message size for five configurations; the five reported small-message latency values are 0.98, 1.04, 1.11, 1.15, and 1.19 us]
• TrueScale-QDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• ConnectX-3-FDR – 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch
• ConnectIB-Dual FDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• ConnectX-5-EDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• Omni-Path – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch
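For context, one-way latency numbers like these are typically obtained from a ping-pong test in the style of the OSU micro-benchmarks (half of the measured round-trip time). The following is a hedged, minimal C sketch of such a measurement, not the actual osu_latency source; the message size and iteration count are illustrative.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define ITER 1000
#define SIZE 8          /* small-message size in bytes (illustrative) */

int main(int argc, char **argv) {
    int rank;
    char buf[SIZE];
    memset(buf, 'a', SIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITER; i++) {
        if (rank == 0) {                    /* ping */
            MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {             /* pong */
            MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* one-way latency = half the average round-trip time */
        printf("one-way latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITER));

    MPI_Finalize();
    return 0;
}
```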
Bandwidth: MPI over IB with MVAPICH2
[Figure: unidirectional and bidirectional bandwidth versus message size for the same five configurations (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, Omni-Path; Haswell/IvyBridge Intel PCI Gen3 nodes as on the previous slide); the peak unidirectional bandwidths reported are 3,373, 6,356, 12,358, 12,366, and 12,590 MBytes/sec, and the peak bidirectional bandwidths are 6,228, 12,161, 21,983, 22,564, and 24,136 MBytes/sec]
Startup Performance on KNL + Omni-Path
[Figure: MPI_Init and Hello World time (MVAPICH2-2.3a) versus number of processes, 64 to 64K, on Oakforest-PACS]
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
• New designs available since MVAPICH2-2.3a and as a patch for SLURM 15, 16, and 17
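A hedged sketch of how MPI_Init time can be measured inside an application; the slides do not show the actual harness, so the clock_gettime-based timing below is an assumption.

```c
#include <mpi.h>
#include <stdio.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(int argc, char **argv) {
    double t0 = now_sec();          /* MPI_Wtime is unavailable before MPI_Init */
    MPI_Init(&argc, &argv);
    double t1 = now_sec();

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double init = t1 - t0, max_init;
    MPI_Reduce(&init, &max_init, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Hello World from %d ranks; max MPI_Init time = %.2f s\n", size, max_init);

    MPI_Finalize();
    return 0;
}
```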
Advanced Allreduce Collective Designs Using SHArP
[Figure: Avg DDOT Allreduce time of HPCG and Mesh Refinement Time of MiniAMR for (nodes, PPN) = (4,28), (8,28), (16,28), comparing MVAPICH2 with MVAPICH2-SHArP; highlighted improvements of 12% and 13%]
• SHArP support is available since MVAPICH2 2.3a
• M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
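The "DDOT Allreduce" being accelerated here is the global dot product inside HPCG's conjugate-gradient iteration: a local partial dot product followed by a small MPI_Allreduce. A hedged illustration of that pattern (the vector slices and loop are placeholders, not HPCG code):

```c
#include <mpi.h>
#include <stdio.h>

/* Distributed dot product: each rank reduces its local slice, then a single
 * small MPI_Allreduce combines the partial sums on every rank.  This is the
 * latency-sensitive operation that SHArP can offload to the IB switches. */
double ddot(const double *x, const double *y, int n_local) {
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n_local; i++)
        local += x[i] * y[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    double x[4] = {1, 2, 3, 4}, y[4] = {1, 1, 1, 1};   /* illustrative local slice */
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double d = ddot(x, y, 4);
    if (rank == 0) printf("global dot product = %f\n", d);
    MPI_Finalize();
    return 0;
}
```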
Performance of MPI_Allreduce On Stampede2 (10,240 Processes)
[Figure: OSU micro-benchmark MPI_Allreduce latency at 64 PPN for message sizes from 4 bytes to 256K bytes, comparing MVAPICH2, MVAPICH2-OPT, and IMPI]
• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X
• M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
• Available in MVAPICH2-X 2.3b
Optimized CMA-based Collectives for Large Messages
[Figure: MPI_Gather latency on KNL nodes (64 PPN) at 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs for 1K-4M byte messages, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and the tuned CMA design; annotated improvements of roughly 2.5x, 3.2x, 4x, and 17x at large message sizes]
• Significant improvement over existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)
• S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist
• Available in MVAPICH2-X 2.3b
Shared Address Space (XPMEM)-based Collectives Design
[Figure: OSU_Reduce and OSU_Allreduce latency on Broadwell with 256 processes for 16K-4M byte messages, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and the optimized MVAPICH2 design]
• "Shared Address Space"-based true zero-copy Reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4M AllReduce
• J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
• Will be available in a future release
Application-Level Benefits of XPMEM-Based Collectives
[Figure: CNTK AlexNet training time (Broadwell, default batch size, 50 iterations, ppn=28) and MiniAMR execution time (Broadwell, ppn=16) versus number of processes, comparing IMPI-2017v1.132, MVAPICH2-2.3b, and the XPMEM-optimized MVAPICH2; highlighted improvements of 20%, 9%, 27%, and 15%]
• Up to 20% benefits over IMPI for CNTK DNN training using AllReduce
• Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel
Dynamic and Adaptive MPI Point-to-point Communication Protocols
Eager threshold for an example communication pattern (processes 0-3 on Node 1 communicating with processes 4-7 on Node 2) under different designs:
• Default: 16 KB eager threshold for every process pair
• Manually Tuned: 128 KB eager threshold for every process pair
• Dynamic + Adaptive: per-pair thresholds matching the desired values (0–4: 32 KB, 1–5: 64 KB, 2–6: 128 KB, 3–7: 32 KB)
• Default: poor overlap, low memory requirement – low performance, high productivity
• Manually Tuned: good overlap, high memory requirement – high performance, low productivity
• Dynamic + Adaptive: good overlap, optimal memory requirement – high performance, high productivity
[Figure: execution time and relative memory consumption of Amber at 128-1K processes for the Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold configurations]
• H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 – Best Paper
• Will be available in a future release (see the sketch below for how the eager threshold affects overlap)
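To illustrate why the eager threshold matters for overlap, here is a hedged sketch (not from the slides) of a nonblocking exchange: messages at or below the eager threshold are typically copied into pre-allocated communication buffers and progress without the receiver's involvement, so the compute phase can overlap the transfer, while larger messages fall back to a rendezvous handshake that may only complete in MPI_Waitall. In MVAPICH2 the threshold can be raised manually with MV2_IBA_EAGER_THRESHOLD (shown elsewhere in these slides); the adaptive design instead selects it per process pair at run time.

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical helper representing the application's compute phase. */
static void compute_phase(double *work, int n) {
    for (int i = 0; i < n; i++) work[i] *= 1.0001;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int peer  = (rank + size / 2) % size;         /* pair ranks across nodes */
    int count = 16 * 1024;                        /* ~128 KB of doubles      */
    double *sendbuf = calloc(count, sizeof(double));
    double *recvbuf = calloc(count, sizeof(double));
    double *work    = calloc(count, sizeof(double));
    MPI_Request reqs[2];

    /* Post nonblocking communication first ... */
    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... then compute.  If the message fits under the eager threshold it can
     * progress during this phase; above the threshold, the rendezvous handshake
     * may only complete inside MPI_Waitall, reducing overlap. */
    compute_phase(work, count);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    free(sendbuf); free(recvbuf); free(work);
    MPI_Finalize();
    return 0;
}
```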
Dynamic and Adaptive Tag Matching
• Challenge: tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.
• Solution: a new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before hitting the required one.
• Results: better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption.
[Figure: normalized total tag matching time and normalized memory overhead per process at 512 processes, relative to the default design (lower is better)]
• Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]
• Will be available in future MVAPICH2 releases
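As background on where this overhead comes from: every incoming message must be matched against the receiver's posted-receive queue by (communicator, source, tag). Below is a hedged sketch, not taken from the slides, of a pattern that stresses matching: many pre-posted receives with distinct tags, so messages arriving out of tag order force long queue traversals (the request count is illustrative).

```c
#include <mpi.h>
#include <stdlib.h>

#define NREQ 1024   /* number of pre-posted receives (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *bufs = calloc(NREQ, sizeof(int));
    MPI_Request reqs[NREQ];

    if (rank == 0) {
        /* Pre-post NREQ receives with distinct tags.  Each arriving message is
         * matched by walking this queue; a message for the last tag arriving
         * first must traverse nearly the whole list -- the cost the adaptive
         * design targets by choosing per-rank matching strategies. */
        for (int t = 0; t < NREQ; t++)
            MPI_Irecv(&bufs[t], 1, MPI_INT, 1, t, MPI_COMM_WORLD, &reqs[t]);
        MPI_Waitall(NREQ, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        /* Send in reverse tag order to maximize traversal length at rank 0. */
        for (int t = NREQ - 1; t >= 0; t--)
            MPI_Send(&t, 1, MPI_INT, 0, t, MPI_COMM_WORLD);
    }

    free(bufs);
    MPI_Finalize();
    return 0;
}
```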
Performance Engineering Applications using MVAPICH2 and TAU
• Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables
• Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU
• Control the runtime’s behavior via MPI Control Variables (CVARs)
• Introduced support for new MPI_T based CVARs to MVAPICH2
– MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
• TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications
[Figure: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf]
• S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI 2017 [Best Paper Award]
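A hedged sketch of how an MPI_T-aware tool (or an application) can look up and write one of the CVARs named above, e.g. MPIR_CVAR_VBUF_POOL_SIZE, using the standard MPI_T interface; the value 512 is purely illustrative, and whether a given CVAR may be changed at this point depends on the scope the runtime reports for it.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    int provided, ncvar, wanted = -1;

    /* MPI_T may be initialized before (and independently of) MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&ncvar);

    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype, &enumtype,
                            desc, &desc_len, &bind, &scope);
        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) { wanted = i; break; }
    }

    if (wanted >= 0) {
        MPI_T_cvar_handle handle;
        int count, value = 512;               /* illustrative pool size */
        MPI_T_cvar_handle_alloc(wanted, NULL, &handle, &count);
        MPI_T_cvar_write(handle, &value);     /* tune the VBUF pool size */
        MPI_T_cvar_handle_free(&handle);
        printf("set MPIR_CVAR_VBUF_POOL_SIZE to %d\n", value);
    }

    MPI_Init(&argc, &argv);
    /* ... application ... */
    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```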
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the
MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• OSU INAM v0.9.2 released on 10/31/2017
• Significant enhancements to user interface to enable scaling to clusters with thousands of nodes
• Improved database insert times by using 'bulk inserts'
• Capability to look up list of nodes communicating through a network link
• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with
MVAPICH2-X 2.3b
• “Best practices” guidelines for deploying OSU INAM on different clusters
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on type of counters using “drop down” list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
[Figures: Comet@SDSC clustered view (1,879 nodes, 212 switches, 4,377 network links); finding routes between nodes]
OSU INAM Features (Cont.)
Visualizing a Job (5 Nodes)
• Job level view
• Show different network metrics (load, error, etc.) for any live job
• Play back historical data for completed jobs to identify bottlenecks
• Node level view - details per process or per node
• CPU utilization for each rank/node
• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
• Network metrics (e.g. XmitDiscard, RcvError) per rank/node
• Estimated Link Utilization view
– Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1
[Figure: estimated process-level link utilization]
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resource
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
Application Level Performance with Graph500 and Sort
[Figure: Graph500 execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); Sort execution time for 500GB-512, 1TB-1K, 2TB-2K, and 4TB-4K input/process configurations for MPI and Hybrid]
• Performance of Hybrid (MPI+OpenSHMEM) Graph500 design
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
• Performance of Hybrid (MPI+OpenSHMEM) Sort application
– 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design
• J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013
• J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
• J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation
[Figure: Heat-Image kernel and Heat-2D (Jacobi) kernel execution time at 16-128 processes for KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell]
• On heat diffusion based kernels, AVX-512 vectorization showed better performance
• MCDRAM showed significant benefits on the Heat-Image kernel for all process counts; combined with AVX-512 vectorization, it showed up to 4X improved performance (a hedged stencil sketch follows below)
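For reference, the Heat-2D kernel referred to above is a Jacobi-style stencil. The slides do not show its source, so the following is only a hedged sketch of the vectorizable inner loop that AVX-512 (and MCDRAM bandwidth) would accelerate, with illustrative grid dimensions and the OpenSHMEM halo exchange omitted.

```c
#include <stdlib.h>

#define NX 1024          /* local grid size per PE (illustrative) */
#define NY 1024

/* One Jacobi sweep: each interior point becomes the average of its four
 * neighbors.  The inner loop is unit-stride and free of dependences, so the
 * compiler can vectorize it with AVX-512 (e.g. via #pragma omp simd). */
static void jacobi_sweep(const double *restrict in, double *restrict out) {
    for (int i = 1; i < NX - 1; i++) {
        #pragma omp simd
        for (int j = 1; j < NY - 1; j++) {
            out[i * NY + j] = 0.25 * (in[(i - 1) * NY + j] + in[(i + 1) * NY + j] +
                                      in[i * NY + j - 1]   + in[i * NY + j + 1]);
        }
    }
}

int main(void) {
    /* Allocating the grids from MCDRAM (e.g. via memkind/hbwmalloc) is what the
     * KNL (AVX-512+MCDRAM) configuration above would do; plain calloc here. */
    double *a = calloc(NX * NY, sizeof(double));
    double *b = calloc(NX * NY, sizeof(double));
    for (int it = 0; it < 100; it++) {       /* halo exchange omitted */
        jacobi_sweep(a, b);
        double *t = a; a = b; b = t;
    }
    free(a); free(b);
    return 0;
}
```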
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
– CUDA-aware MPI
– Optimized GPUDirect RDMA (GDR) Support
– Support for Streaming Applications
• Optimized MVAPICH2 for OpenPOWER and ARM
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• At the sender: MPI_Send(s_devbuf, size, …);
• At the receiver: MPI_Recv(r_devbuf, size, …);
• Standard MPI interfaces used for unified data movement; all GPU-related data movement is handled inside MVAPICH2
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High Performance and High Productivity
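A hedged end-to-end sketch of the usage model above: device buffers are passed directly to MPI_Send/MPI_Recv and the CUDA-aware library moves the data. The message size and the lack of error handling are illustrative simplifications.

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define N (4 * 1024 * 1024)   /* 4M floats, illustrative */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *devbuf;
    cudaMalloc((void **)&devbuf, N * sizeof(float));   /* GPU-resident buffer */

    if (rank == 0) {
        cudaMemset(devbuf, 0, N * sizeof(float));
        /* The device pointer goes straight into MPI; the CUDA-aware library
         * moves the data (pipelining, GDR, IPC, ...) with no explicit
         * cudaMemcpy in the application. */
        MPI_Send(devbuf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(devbuf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", N);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```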
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication
(GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU
adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from
GPU device buffers
• Unified memory
Optimized MVAPICH2-GDR Design
[Figure: GPU-GPU inter-node latency, bandwidth, and bidirectional bandwidth versus message size, comparing MV2 (no GDR) with MV2-GDR 2.3a; small-message latency of 1.88 us (up to 11X better) and roughly 9x-10x higher uni- and bidirectional bandwidth]
• MVAPICH2-GDR-2.3a on an Intel Haswell (E5-2687W @ 3.10 GHz, 20 cores) node with an NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, and Mellanox OFED 4.0 with GPU-Direct-RDMA
Application-Level Evaluation (HOOMD-blue)
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HoomdBlue Version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
[Figure: average time steps per second (TPS) at 4-32 processes for 64K and 256K particles, comparing MV2 with MV2+GDR; roughly 2X higher TPS with GDR]
Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
[Figure: normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for Default, Callback-based, and Event-based designs]
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS'16
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
High-Performance Heterogeneous Broadcast for Streaming Applications
• Streaming applications on GPU clusters
– Use a pipeline of broadcast operations to move host-resident data from a single source (typically live) to multiple GPU-based computing sites
– Existing schemes require explicit data movement between host and GPU memories, giving poor performance and breaking the pipeline
• IB hardware multicast + scatter-list
– Efficient heterogeneous-buffer broadcast operation
• CUDA Inter-Process Communication (IPC)
– Efficient intra-node topology-aware broadcast operations for multi-GPU systems
• Available in MVAPICH2-GDR 2.3a!
[Diagram: the source node's CPU drives IB hardware multicast steps through the switch to the receiving nodes, where an IB scatter-list step and IPC-based cudaMemcpy (device-to-device) deliver the data to GPUs 0…N on each node]
• Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters. C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, SBAC-PAD'16, Oct 2016.
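From the application's point of view, the pattern above is simply a stream of broadcasts whose destination buffers live in GPU memory; the multicast and scatter-list machinery sits inside the library. Below is a hedged sketch of that user-facing loop (the chunk size, chunk count, and the host-resident root buffer are illustrative assumptions).

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define CHUNK  (1 << 20)   /* 1 MB per broadcast, illustrative */
#define NCHUNK 64          /* number of pipelined chunks, illustrative */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf;
    if (rank == 0) {
        buf = malloc(CHUNK);                 /* host-resident live source */
    } else {
        cudaMalloc((void **)&buf, CHUNK);    /* GPU-based computing site  */
    }

    for (int c = 0; c < NCHUNK; c++) {
        /* Each iteration broadcasts one chunk; a CUDA-aware MPI_Bcast accepts
         * the device pointer on the receivers and the host pointer on the
         * root, keeping the streaming pipeline intact. */
        MPI_Bcast(buf, CHUNK, MPI_CHAR, 0, MPI_COMM_WORLD);
        /* ... launch GPU processing on this chunk on non-root ranks ... */
    }

    if (rank == 0) free(buf); else cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```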
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
Inter-node Point-to-Point Performance on OpenPOWER
[Figure: small- and large-message latency, bandwidth, and bidirectional bandwidth versus message size for MVAPICH2]
• Platform: two OpenPOWER (Power8-ppc64le) nodes using a Mellanox EDR (MT4115) HCA
Intra-node Point-to-Point Performance on ARMv8
[Figure: small- and large-message latency, bandwidth, and bidirectional bandwidth versus message size for MVAPICH2; 0.74 microsecond latency at 4 bytes]
• Platform: ARMv8 (aarch64) processor, dual-socket CPU with 96 cores (48 cores per socket)
• Available since MVAPICH2 2.3a
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
[Figure: intra-node and inter-node GPU latency and bandwidth versus message size, split into intra-socket (NVLink) and inter-socket results]
• Intra-node latency: 14.6 us (without GPUDirect RDMA); intra-node bandwidth: 33.9 GB/sec (NVLink)
• Inter-node latency: 23.8 us (without GPUDirect RDMA); inter-node bandwidth: 11.9 GB/sec (EDR)
• Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and EDR InfiniBand interconnect
• Available in MVAPICH2-GDR 2.3a
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
Application Scalability on Skylake and KNL
[Figure: execution time of MiniFE (1300x1300x1300, ~910 GB), NEURON (YuEtAl2012), and Cloverleaf (bm64, MPI+OpenMP with NUM_OMP_THREADS=2) with MVAPICH2 on TACC Stampede2 KNL (64-68 ppn) and Skylake (48 ppn) nodes, at process counts from 48 up to 8,192]
• Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
• Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC; testbed: TACC Stampede2 using MVAPICH2-2.3b
Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)
[Figure: execution time of milc, leslie3d, pop2, lammps, wrf2, tera_tf, and lu with Intel MPI 18.0.0 versus MVAPICH2 2.3rc1; per-benchmark improvements of roughly 1-10%]
• MVAPICH2 outperforms Intel MPI by up to 10%
• 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn)
Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)
[Figure: execution time of milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf, and lu with Intel MPI 18.0.0 versus MVAPICH2 2.3rc1; per-benchmark differences range from -4% to 38%]
• MVAPICH2 outperforms Intel MPI by up to 38%
• 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn)
Compilation of Best Practices
• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• Compiled a list of such contributions through the MVAPICH website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HoomdBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu and we will link them with credit to you.
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
– Virtualization with SR-IOV and Containers
Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing state-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– CUDA-Aware MPI --> scale-out performance, for small and medium message sizes only!
• Proposed: can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient overlap of computation and communication
– Efficient large-message communication (reductions)
– What application co-designs are needed to exploit communication-runtime co-designs?
[Diagram: scale-up performance (cuDNN, cuBLAS, NCCL) versus scale-out performance (MPI, gRPC, Hadoop), with the proposed co-designs targeting both]
• A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
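The large-message reductions referred to above are gradient averages in data-parallel training: every iteration, each rank reduces a multi-megabyte, GPU-resident gradient buffer. Below is a hedged sketch of that pattern using a CUDA-aware MPI_Allreduce; the gradient size, iteration count, and in-place reduction are illustrative, and frameworks such as OSU-Caffe drive this from inside their training loops rather than in a standalone program.

```c
#include <mpi.h>
#include <cuda_runtime.h>

#define NPARAM (16 * 1024 * 1024)   /* 16M floats = 64 MB of gradients, illustrative */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    float *grad;                                   /* gradients live on the GPU */
    cudaMalloc((void **)&grad, NPARAM * sizeof(float));
    cudaMemset(grad, 0, NPARAM * sizeof(float));

    for (int iter = 0; iter < 10; iter++) {
        /* ... forward/backward pass fills 'grad' on the GPU ... */

        /* Sum gradients across all ranks directly on device buffers; this is
         * the large-message Allreduce that the GDR collective designs target. */
        MPI_Allreduce(MPI_IN_PLACE, grad, NPARAM, MPI_FLOAT, MPI_SUM,
                      MPI_COMM_WORLD);

        /* ... scale by 1/nranks on the GPU and apply the SGD update ... */
    }

    cudaFree(grad);
    MPI_Finalize();
    return 0;
}
```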
Efficient Broadcast: MVAPICH2-GDR and NCCL
• NCCL 1.x had some limitations
– Only worked for a single node; no scale-out on multiple nodes
– Degradation across the IOH (socket) for scale-up (within a node)
• We propose an optimized MPI_Bcast design that exploits NCCL [1]
– Communication of very large GPU buffers
– Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits:
– CUDA-Aware MPI_Bcast in MV2-GDR for inter-node (knomial) communication
– NCCL broadcast (ring) for intra-node transfers
• Can pure MPI-level designs be done that achieve similar or better performance than the NCCL-based approach? [2]
[Figure: VGG training time with CNTK versus number of GPUs, comparing MV2-GDR with MV2-GDR-NCCL, and MV2-GDR-NCCL with MV2-GDR-Opt]
1. A. A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning. In Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016). [Best Paper Nominee]
2. A. A. Awan, C-H. Chu, H. Subramoni, and D. K. Panda. Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, arXiv '17 (https://arxiv.org/abs/1707.09414)
Large Message Optimized Collectives for Deep Learning
[Figure: Reduce, Allreduce, and Bcast latency for 2-128 MB messages at 64-192 GPUs, showing nearly flat latency growth as the GPU count increases]
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with a large number of GPUs
• Available since MVAPICH2-GDR 2.2GA
MVAPICH2-GDR vs. Baidu-allreduce
[Figure: Allreduce latency versus message size on 8 GPUs (4 nodes), comparing MV2-GDR-Tuned with Baidu-allreduce; highlighted gains of roughly 100X, about 20%, and roughly 10X over different message-size ranges (up to 512 MB)]
• Initial evaluation shows promising performance gains for MVAPICH2-GDR 2.3a
• 8 GPUs (4 nodes): MPI_Allreduce (MVAPICH2-GDR) vs. Baidu-allreduce
• Available with MVAPICH2-GDR 2.3a
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based parallel training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
– Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
[Figure: GoogLeNet (ImageNet) training time on up to 128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); one configuration is marked as an invalid use case]
• OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
– Virtualization with SR-IOV and Containers
Can HPC and Virtualization be Combined?
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
– OpenStack, Docker, and Singularity
• J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
• J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC'14
• J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: an Efficient Approach to Build HPC Clouds, CCGrid'15
Application-Level Performance on Chameleon
[Figure: Graph500 BFS execution time for problem sizes (scale, edgefactor) from (22,20) to (26,16), and SPEC MPI2007 execution time for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]
• 32 VMs, 6 cores/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 processes
• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 processes
Application-Level Performance on Docker with MVAPICH2
[Figure: NAS (MG.D, FT.D, EP.D, LU.D, CG.D) execution time and Graph 500 BFS execution time (scale 20, edgefactor 16; 1 container x 16 processes, 2 x 8, 4 x 4), comparing Container-Def, Container-Opt, and Native]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 73% reduction in execution time for NAS and Graph 500, respectively
• Compared to Native, less than 9% and 5% overhead for NAS and Graph 500, respectively
Application-Level Performance on Singularity with MVAPICH2
[Figure: Graph500 BFS execution time for problem sizes (22,16) to (26,20) and NPB Class D (CG, EP, FT, IS, LU, MG) execution time, comparing Singularity with Native]
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
• J. Zhang, X. Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC '17, Best Student Paper Award
High Performance SR-IOV enabled VM Migration Framework
• Consists of an SR-IOV enabled IB cluster and an external migration controller
• Detachment/re-attachment of virtualized devices: multiple parallel libraries coordinate the VM during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, migration status)
• IB connection: the MPI runtime handles IB connection suspension and reactivation
• Proposes Progress Engine (PE) based and Migration Thread (MT) based designs to optimize VM migration and MPI application performance
• J. Zhang, X. Lu, D. K. Panda. High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters. IPDPS, 2017
Performance Evaluation of VM Migration Framework
[Figure: point-to-point latency and Bcast latency (4 VMs x 2 processes/VM) versus message size while a VM is migrated, comparing PE-IPoIB, PE-RDMA, MT-IPoIB, and MT-RDMA]
• Migrate a VM from one machine to another while the benchmark is running inside
• Proposed MT-based designs perform slightly worse than PE-based designs because of lock/unlock overhead
• No benefit from MT because no computation is involved
• Will be available in an upcoming MVAPICH2-Virt release
Concluding Remarks
• Next generation HPC systems need to be designed with a holistic view of Deep Learning and Cloud
• Presented some of the approaches and results along these directions
• Enable the Deep Learning and Cloud communities to take advantage of modern HPC technologies
• Many other open issues need to be solved
Funding Acknowledgments
Funding Support by
Equipment Support by
Personnel Acknowledgments
Current Students (Graduate)
– A. Awan (Ph.D.)
– R. Biswas (M.S.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.-H. Chu (Ph.D.)
– S. Guganani (Ph.D.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist
– K. Hamidouche
– S. Sur
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– M. Li (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– J. Hashmi (Ph.D.)
– H. Javed (Ph.D.)
– P. Kousha (Ph.D.)
– D. Shankar (Ph.D.)
– H. Shi (Ph.D.)
– J. Zhang (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists
– X. Lu
– H. Subramoni
Past Programmers
– D. Bureddy
– J. Perkins
Current Research Specialist
– J. Smith
– M. Arnold
– S. Marcarelli
– J. Vienne
– H. Wang
Current Post-doc
– A. Ruhela
Current Students (Undergraduate)
– N. Sarkauskas (B.S.)
Thank You!
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
panda@cse.ohio-state.edu
The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/

Contenu connexe

Tendances

TAU E4S ON OpenPOWER /POWER9 platform
TAU E4S ON OpenPOWER /POWER9 platformTAU E4S ON OpenPOWER /POWER9 platform
TAU E4S ON OpenPOWER /POWER9 platformGanesan Narayanasamy
 
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep Learning
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep LearningApache Spark 2.4 Bridges the Gap Between Big Data and Deep Learning
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep LearningDataWorks Summit
 
Building Efficient HPC Clouds with MCAPICH2 and RDMA-Hadoop over SR-IOV Infin...
Building Efficient HPC Clouds with MCAPICH2 and RDMA-Hadoop over SR-IOV Infin...Building Efficient HPC Clouds with MCAPICH2 and RDMA-Hadoop over SR-IOV Infin...
Building Efficient HPC Clouds with MCAPICH2 and RDMA-Hadoop over SR-IOV Infin...inside-BigData.com
 
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...inside-BigData.com
 
Dealing with an Upside Down Internet
Dealing with an Upside Down InternetDealing with an Upside Down Internet
Dealing with an Upside Down InternetMapR Technologies
 
Elastify Cloud-Native Spark Application with Persistent Memory
Elastify Cloud-Native Spark Application with Persistent MemoryElastify Cloud-Native Spark Application with Persistent Memory
Elastify Cloud-Native Spark Application with Persistent MemoryDatabricks
 
How Researchers Will Benefit from Canada’s National Data Cyberinfrastructure
How Researchers Will Benefit from Canada’s National Data CyberinfrastructureHow Researchers Will Benefit from Canada’s National Data Cyberinfrastructure
How Researchers Will Benefit from Canada’s National Data Cyberinfrastructureinside-BigData.com
 
The columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowDataWorks Summit
 
Deep Learning with Spark and GPUs
Deep Learning with Spark and GPUsDeep Learning with Spark and GPUs
Deep Learning with Spark and GPUsDataWorks Summit
 
Hug france-2012-12-04
Hug france-2012-12-04Hug france-2012-12-04
Hug france-2012-12-04Ted Dunning
 
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facilityinside-BigData.com
 
A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator S...
A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator S...A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator S...
A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator S...inside-BigData.com
 
Trends in Systems and How to Get Efficient Performance
Trends in Systems and How to Get Efficient PerformanceTrends in Systems and How to Get Efficient Performance
Trends in Systems and How to Get Efficient Performanceinside-BigData.com
 
Automatski - RSA-2048 Cryptography Cracked using Shor's Algorithm on a Quantu...
Automatski - RSA-2048 Cryptography Cracked using Shor's Algorithm on a Quantu...Automatski - RSA-2048 Cryptography Cracked using Shor's Algorithm on a Quantu...
Automatski - RSA-2048 Cryptography Cracked using Shor's Algorithm on a Quantu...Aditya Yadav
 
MapR LucidWorks Joint Webinar 121211
MapR LucidWorks Joint Webinar 121211MapR LucidWorks Joint Webinar 121211
MapR LucidWorks Joint Webinar 121211MapR Technologies
 
Bringing Real-Time to the Enterprise with Hortonworks DataFlow
Bringing Real-Time to the Enterprise with Hortonworks DataFlowBringing Real-Time to the Enterprise with Hortonworks DataFlow
Bringing Real-Time to the Enterprise with Hortonworks DataFlowDataWorks Summit
 
Scalable and Distributed DNN Training on Modern HPC Systems
Scalable and Distributed DNN Training on Modern HPC SystemsScalable and Distributed DNN Training on Modern HPC Systems
Scalable and Distributed DNN Training on Modern HPC Systemsinside-BigData.com
 
Realistic Synthetic Generation Allows Secure Development
Realistic Synthetic Generation Allows Secure DevelopmentRealistic Synthetic Generation Allows Secure Development
Realistic Synthetic Generation Allows Secure DevelopmentDataWorks Summit
 
Nvidia SC16: The Greatest Challenges Can't Wait
Nvidia SC16: The Greatest Challenges Can't WaitNvidia SC16: The Greatest Challenges Can't Wait
Nvidia SC16: The Greatest Challenges Can't Waitinside-BigData.com
 
Introduction to High-Performance Computing (HPC) Containers and Singularity*
Introduction to High-Performance Computing (HPC) Containers and Singularity*Introduction to High-Performance Computing (HPC) Containers and Singularity*
Introduction to High-Performance Computing (HPC) Containers and Singularity*Intel® Software
 

Tendances (20)

TAU E4S ON OpenPOWER /POWER9 platform
TAU E4S ON OpenPOWER /POWER9 platformTAU E4S ON OpenPOWER /POWER9 platform
TAU E4S ON OpenPOWER /POWER9 platform
 
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep Learning
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep LearningApache Spark 2.4 Bridges the Gap Between Big Data and Deep Learning
Apache Spark 2.4 Bridges the Gap Between Big Data and Deep Learning
 
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems

  • 1. Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda Keynote Talk at HPCAC Stanford Conference (Feb ‘18) by
  • 2. HPCAC-Stanford (Feb ’18) 2Network Based Computing Laboratory High-End Computing (HEC): Towards Exascale Expected to have an ExaFlop system in 2019-2020! 100 PFlops in 2016 1 EFlops in 2019- 2020?
  • 3. HPCAC-Stanford (Feb ’18) 3Network Based Computing Laboratory Big Data (Hadoop, Spark, HBase, Memcached, etc.) Deep Learning (Caffe, TensorFlow, BigDL, etc.) HPC (MPI, RDMA, Lustre, etc.) Increasing Usage of HPC, Big Data and Deep Learning Convergence of HPC, Big Data, and Deep Learning! Increasing Need to Run these applications on the Cloud!!
  • 4. HPCAC-Stanford (Feb ’18) 4Network Based Computing Laboratory • Traditional HPC – Message Passing Interface (MPI), including MPI + OpenMP – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC) – Exploiting Accelerators • Deep Learning – Caffe and CNTK • Cloud for HPC – Virtualization with SR-IOV and Containers Focus of Today’s Talk: HPC, Deep Learning, and Cloud Focus of Tomorrow’s Talk: HPC + Big Data, Deep Learning, and Cloud
  • 5. HPCAC-Stanford (Feb ’18) 5Network Based Computing Laboratory • Big Data/Enterprise/Commercial Computing – Spark and Hadoop (HDFS, HBase, MapReduce) – Memcached for Web 2.0 – Kafka for Stream Processing – Swift • Deep Learning – TensorFlow with gRPC • Cloud for Big Data Stacks – Virtualization with SR-IOV and Containers Focus of Tomorrow’s Talk: HPC, BigData, Deep Learning and Cloud
  • 6. HPCAC-Stanford (Feb ’18) 6Network Based Computing Laboratory Parallel Programming Models Overview P1 P2 P3 Shared Memory P1 P2 P3 Memory Memory Memory P1 P2 P3 Memory Memory Memory Logical shared memory Shared Memory Model SHMEM, DSM Distributed Memory Model MPI (Message Passing Interface) Partitioned Global Address Space (PGAS) Global Arrays, UPC, Chapel, X10, CAF, … • Programming models provide abstract machine models • Models can be mapped on different types of systems – e.g. Distributed Shared Memory (DSM), MPI within a node, etc. • PGAS models and Hybrid MPI+PGAS models are gradually receiving importance
  • 7. HPCAC-Stanford (Feb ’18) 7Network Based Computing Laboratory Partitioned Global Address Space (PGAS) Models • Key features - Simple shared memory abstractions - Light weight one-sided communication - Easier to express irregular communication • Different approaches to PGAS - Languages • Unified Parallel C (UPC) • Co-Array Fortran (CAF) • X10 • Chapel - Libraries • OpenSHMEM • UPC++ • Global Arrays
  • 8. HPCAC-Stanford (Feb ’18) 8Network Based Computing Laboratory Hybrid (MPI+PGAS) Programming • Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics • Benefits: – Best of Distributed Computing Model – Best of Shared Memory Computing Model Kernel 1 MPI Kernel 2 MPI Kernel 3 MPI Kernel N MPI HPC Application Kernel 2 PGAS Kernel N PGAS
  • 9. HPCAC-Stanford (Feb ’18) 9Network Based Computing Laboratory Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges Programming Models MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc. Application Kernels/Applications Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path) Multi-/Many-core Architectures Accelerators (GPU and FPGA) Middleware Co-Design Opportunities and Challenges across Various Layers Performance Scalability Resilience Communication Library or Runtime for Programming Models Point-to-point Communication Collective Communication Energy- Awareness Synchronization and Locks I/O and File Systems Fault Tolerance
  • 10. HPCAC-Stanford (Feb ’18) 10Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided) – Scalable job start-up • Scalable Collective communication – Offload – Non-blocking – Topology-aware • Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores) – Multiple end-points per node • Support for efficient multi-threading • Integrated Support for GPGPUs and Accelerators • Fault-tolerance/resiliency • QoS support for communication and I/O • Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI+UPC++, CAF, …) • Virtualization • Energy-Awareness Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
  • 11. HPCAC-Stanford (Feb ’18) 11Network Based Computing Laboratory • Extreme Low Memory Footprint – Memory per core continues to decrease • D-L-A Framework – Discover • Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job • Node architecture, Health of network and node – Learn • Impact on performance and scalability • Potential for failure – Adapt • Internal protocols and algorithms • Process mapping • Fault-tolerance solutions – Low overhead techniques while delivering performance, scalability and fault-tolerance Additional Challenges for Designing Exascale Software Libraries
  • 12. HPCAC-Stanford (Feb ’18) 12Network Based Computing Laboratory Overview of the MVAPICH2 Project • High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE) – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002 – MVAPICH2-X (MPI + PGAS), Available since 2011 – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014 – Support for Virtualization (MVAPICH2-Virt), Available since 2015 – Support for Energy-Awareness (MVAPICH2-EA), Available since 2015 – Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015 – Used by more than 2,875 organizations in 85 countries – More than 445,000 (> 0.4 million) downloads from the OSU site directly – Empowering many TOP500 clusters (Nov ‘17 ranking) • 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China • 12th, 368,928-core (Stampede2) at TACC • 17th, 241,108-core (Pleiades) at NASA • 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology – Available with software stacks of many vendors and Linux Distros (RedHat and SuSE) – http://mvapich.cse.ohio-state.edu • Empowering Top500 systems for over a decade – System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> – Sunway TaihuLight (1st in Jun’17, 10M cores, 100 PFlops)
  • 13. HPCAC-Stanford (Feb ’18) 13 Network Based Computing Laboratory MVAPICH2 Release Timeline and Downloads (chart: cumulative number of downloads from Sep-04 through Jan-18, annotated with releases from MV 0.9.4 and MV2 0.9.0 through MV2 2.1, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3a, MV2-X 2.3b, MV2-Virt 2.2, and MV2 2.3rc1)
  • 14. HPCAC-Stanford (Feb ’18) 14 Network Based Computing Laboratory Architecture of MVAPICH2 Software Family High Performance Parallel Programming Models Message Passing Interface (MPI) PGAS (UPC, OpenSHMEM, CAF, UPC++) Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk) High Performance and Scalable Communication Runtime Diverse APIs and Mechanisms Point-to-point Primitives Collectives Algorithms Energy-Awareness Remote Memory Access I/O and File Systems Fault Tolerance Virtualization Active Messages Job Startup Introspection & Analysis Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) Support for Modern Multi-/Many-core Architectures (Intel-Xeon, OpenPower, Xeon-Phi, ARM, NVIDIA GPGPU) Transport Protocols Modern Features RC XRC UD DC UMR ODP SR-IOV Multi Rail Transport Mechanisms Shared Memory CMA IVSHMEM Modern Features MCDRAM* NVLink* CAPI* * Upcoming XPMEM*
  • 15. HPCAC-Stanford (Feb ’18) 15Network Based Computing Laboratory • Released on 02/19/2018 • Major Features and Enhancements – Enhanced performance for Allreduce, Reduce_scatter_block, Allgather, Allgatherv through new algorithms – Enhance support for MPI_T PVARs and CVARs – Improved job startup time for OFA-IB-CH3, PSM-CH3, and PSM2-CH3 – Support to automatically detect IP address of IB/RoCE interfaces when RDMA_CM is enabled without relying on mv2.conf file – Enhance HCA detection to handle cases where node has both IB and RoCE HCAs – Automatically detect and use maximum supported MTU by the HCA – Added logic to detect heterogeneous CPU/HFI configurations in PSM-CH3 and PSM2-CH3 channels – Enhanced intra-node and inter-node tuning for PSM-CH3 and PSM2-CH3 channels – Enhanced HFI selection logic for systems with multiple Omni-Path HFIs – Enhanced tuning and architecture detection for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems – Added 'SPREAD', 'BUNCH', and 'SCATTER' binding options for hybrid CPU binding policy – Rename MV2_THREADS_BINDING_POLICY to MV2_HYBRID_BINDING_POLICY – Added support for MV2_SHOW_CPU_BINDING to display number of OMP threads – Update to hwloc version 1.11.9 MVAPICH2 2.3rc1
  • 16. HPCAC-Stanford (Feb ’18) 16Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient communication – Scalable Start-up – Optimized Collectives using SHArP and Multi-Leaders – Optimized CMA-based Collectives – Optimized XPMEM-based Collectives – Dynamic and Adaptive Communication and Tag Matching – Performance Engineering with MPI-T Support – Integrated Network Analysis and Monitoring • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • Optimized MVAPICH2 for OpenPower and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 17. HPCAC-Stanford (Feb ’18) 17 Network Based Computing Laboratory One-way Latency: MPI over IB with MVAPICH2 (charts: small-message and large-message latency in us vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path, with small-message latencies of 0.98 to 1.19 us; platforms: 3.1 GHz Deca-core (Haswell) Intel nodes with PCI Gen3 and IB or Omni-Path switches, except ConnectX-3-FDR on 2.8 GHz Deca-core (IvyBridge) Intel nodes)
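The latency numbers above come from ping-pong microbenchmarks in the style of osu_latency. Purely as an illustration of that measurement pattern (a minimal sketch, not the OSU benchmark source; the message size and iteration counts are arbitrary choices), a two-rank MPI ping-pong loop looks like this:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SKIP  100      /* warm-up iterations, not timed */
#define ITERS 10000    /* timed iterations */

int main(int argc, char **argv)
{
    int rank, size = 8;            /* message size in bytes (assumption) */
    char *buf;
    double start = 0.0, end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = calloc(size, 1);

    for (int i = 0; i < SKIP + ITERS; i++) {
        if (i == SKIP)
            start = MPI_Wtime();   /* start timing after warm-up */
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    end = MPI_Wtime();

    if (rank == 0)  /* one-way latency = half the average round-trip time */
        printf("%d bytes: %.2f us\n", size, (end - start) * 1e6 / (2.0 * ITERS));

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different nodes to measure inter-node latency; the reported value is half the average round-trip time.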
  • 18. HPCAC-Stanford (Feb ’18) 18 Network Based Computing Laboratory Bandwidth: MPI over IB with MVAPICH2 (charts: unidirectional and bidirectional bandwidth in MBytes/sec vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path on the same Haswell/IvyBridge PCI Gen3 platforms as the latency results; peak unidirectional bandwidths of 3,373 / 6,356 / 12,358 / 12,366 / 12,590 MBytes/sec and peak bidirectional bandwidths of 6,228 / 12,161 / 21,983 / 22,564 / 24,136 MBytes/sec across the five configurations)
  • 19. HPCAC-Stanford (Feb ’18) 19 Network Based Computing Laboratory Startup Performance on KNL + Omni-Path (chart: MPI_Init & Hello World time in seconds vs. number of processes on Oakforest-PACS for MVAPICH2-2.3a, separate series for MPI_Init and Hello World) • MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – Full scale) • 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC) • At 64K processes, MPI_Init and Hello World takes 5.8s and 21s respectively (Oakforest-PACS) • All numbers reported with 64 processes per node • New designs available since MVAPICH2-2.3a and as patch for SLURM 15, 16, and 17
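Startup benchmarks of this kind time MPI_Init itself plus a trivial "Hello World" step. The following is only a sketch of how such a measurement can be taken (an assumption about methodology, not the actual benchmark code used on Stampede2 or Oakforest-PACS); MPI_Wtime is not usable before MPI_Init, so an ordinary wall-clock timer is used instead:

```c
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

/* Wall-clock helper usable before MPI_Init. */
static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now_sec();
    MPI_Init(&argc, &argv);
    double t_init = now_sec() - t0;     /* local MPI_Init time */

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Take the slowest rank as the job-wide startup time. */
    double t_max;
    MPI_Reduce(&t_init, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Hello World from %d ranks, max MPI_Init time = %.3f s\n",
               nprocs, t_max);

    MPI_Finalize();
    return 0;
}
```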
  • 20. HPCAC-Stanford (Feb ’18) 20 Network Based Computing Laboratory Advanced Allreduce Collective Designs Using SHArP (charts: Mesh Refinement Time of MiniAMR and Avg DDOT Allreduce time of HPCG, latency in seconds vs. (number of nodes, PPN) for MVAPICH2 vs. MVAPICH2-SHArP; 13% improvement for MiniAMR and 12% for HPCG) SHArP Support is available since MVAPICH2 2.3a M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.
  • 21. HPCAC-Stanford (Feb ’18) 21 Network Based Computing Laboratory Performance of MPI_Allreduce On Stampede2 (10,240 Processes) (charts: OSU Micro Benchmark MPI_Allreduce latency in us vs. message size, 64 PPN, comparing MVAPICH2, MVAPICH2-OPT, and IMPI) • For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17. Available in MVAPICH2-X 2.3b
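Collective results like these are typically gathered with an osu_allreduce-style timing loop. The following is a hedged sketch of that pattern (the 32 KB payload mirrors the message size highlighted above; the iteration count and datatype are arbitrary choices for the sketch):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int count = 32 * 1024 / sizeof(float);  /* ~32 KB payload (assumption) */
    const int iters = 1000;
    float *sendbuf, *recvbuf;
    int rank;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sendbuf = calloc(count, sizeof(float));
    recvbuf = calloc(count, sizeof(float));

    /* Warm up once, then time the collective over many iterations. */
    MPI_Allreduce(sendbuf, recvbuf, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    start = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(sendbuf, recvbuf, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("MPI_Allreduce, %zu bytes: %.2f us per call\n",
               count * sizeof(float), elapsed * 1e6 / iters);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```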
  • 22. HPCAC-Stanford (Feb ’18) 22 Network Based Computing Laboratory Optimized CMA-based Collectives for Large Messages (charts: MPI_Gather latency in us vs. message size on KNL nodes, 64 PPN, for 2, 4, and 8 nodes (128/256/512 processes), comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and Tuned CMA; Tuned CMA is roughly 2.5x to 17x better) • Significant improvement over existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPower) • New two-level algorithms for better scalability • Improved performance for other collectives (Bcast, Allgather, and Alltoall) S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, BEST Paper Finalist. Available in MVAPICH2-X 2.3b
  • 23. HPCAC-Stanford (Feb ’18) 23 Network Based Computing Laboratory Shared Address Space (XPMEM)-based Collectives Design (charts: OSU_Allreduce and OSU_Reduce latency in us vs. message size on Broadwell, 256 processes, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-Opt) • “Shared Address Space”-based true zero-copy Reduction collective designs in MVAPICH2 • Offloaded computation/communication to peer ranks in reduction collective operation • Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4M AllReduce J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018. Will be available in a future release.
  • 24. HPCAC-Stanford (Feb ’18) 24 Network Based Computing Laboratory Application-Level Benefits of XPMEM-Based Collectives (charts: execution time in seconds vs. number of processes for CNTK AlexNet Training (Broadwell, B.S=default, iteration=50, ppn=28) and MiniAMR (Broadwell, ppn=16), comparing IMPI-2017v1.132, MVAPICH2-2.3b, and MVAPICH2-Opt) • Up to 20% benefits over IMPI for CNTK DNN training using AllReduce • Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for MiniAMR application kernel
  • 25. HPCAC-Stanford (Feb ’18) 25 Network Based Computing Laboratory Dynamic and Adaptive MPI Point-to-point Communication Protocols (figure: example communication pattern between process pairs on two nodes with desired eager thresholds of 32/64/128/32 KB for pairs 0–4, 1–5, 2–6, and 3–7; the Default design uses 16 KB for every pair, Manually Tuned uses 128 KB for every pair, and Dynamic + Adaptive picks 32/64/128/32 KB per pair; charts: execution time and relative memory consumption of Amber vs. number of processes for Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold) Design trade-offs: Default – poor overlap, low memory requirement, low performance, high productivity; Manually Tuned – good overlap, high memory requirement, high performance, low productivity; Dynamic + Adaptive – good overlap, optimal memory requirement, high performance, high productivity. H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper. Will be available in a future release.
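The eager threshold governs the trade-off summarized above: messages at or below it are sent eagerly into pre-allocated buffers (better overlap, more memory), while larger messages fall back to a rendezvous handshake. MVAPICH2 exposes a manual knob for this, MV2_IBA_EAGER_THRESHOLD (it also appears in the HOOMD-blue settings later in the deck), whereas the design on this slide selects per-pair thresholds automatically. As a generic illustration of the overlap pattern such tuning targets (a sketch, not code from the cited paper), consider:

```c
#include <mpi.h>

/* Overlap pattern that eager-threshold tuning targets: post nonblocking
 * send/recv, do useful work, then wait.  For messages below the eager
 * threshold the library can buffer the send and make progress without the
 * receiver, so more of compute() is hidden behind the transfer. */
void exchange_with_overlap(const double *sendbuf, double *recvbuf, int count,
                           int peer, MPI_Comm comm, void (*compute)(void))
{
    MPI_Request reqs[2];

    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

    compute();   /* computation overlapped with the in-flight messages */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```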
  • 26. HPCAC-Stanford (Feb ’18) 26 Network Based Computing Laboratory Dynamic and Adaptive Tag Matching (charts: normalized total tag matching time and normalized memory overhead per process at 512 processes, relative to the default design; lower is better) Challenge: tag matching is a significant overhead for receivers; existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead. Solution: a new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one. Results: better performance than other state-of-the-art tag-matching schemes with minimum memory consumption. Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee] Will be available in future MVAPICH2 releases.
  • 27. HPCAC-Stanford (Feb ’18) 27Network Based Computing Laboratory ● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables ● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU ● Control the runtime’s behavior via MPI Control Variables (CVARs) ● Introduced support for new MPI_T based CVARs to MVAPICH2 ○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE ● TAU enhanced with support for setting MPI_T CVARs in a non- interactive mode for uninstrumented applications Performance Engineering Applications using MVAPICH2 and TAU VBUF usage without CVAR based tuning as displayed by ParaProf VBUF usage with CVAR based tuning as displayed by ParaProf S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, Euro MPI, 2017 [Best Paper Award]
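The MPI_T calls referenced here are part of the standard MPI-3 tools interface; the CVAR names above (e.g., MPIR_CVAR_VBUF_POOL_SIZE) are the MVAPICH2-specific variables it exposes. Independently of TAU, a minimal sketch of enumerating a runtime's control variables through MPI_T might look like the following (run it with a single rank to keep the output readable):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, ncvar;

    /* The MPI_T tools interface is initialized separately from MPI itself. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    printf("Runtime exposes %d control variables\n", ncvar);

    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("CVAR %d: %s\n", i, name);  /* e.g., MPIR_CVAR_VBUF_POOL_SIZE */
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```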
  • 28. HPCAC-Stanford (Feb ’18) 28Network Based Computing Laboratory Overview of OSU INAM • A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime – http://mvapich.cse.ohio-state.edu/tools/osu-inam/ • Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes • OSU INAM v0.9.2 released on 10/31/2017 • Significant enhancements to user interface to enable scaling to clusters with thousands of nodes • Improve database insert times by using 'bulk inserts‘ • Capability to look up list of nodes communicating through a network link • Capability to classify data flowing over a network link at job level and process level granularity in conjunction with MVAPICH2-X 2.3b • “Best practices “ guidelines for deploying OSU INAM on different clusters • Capability to analyze and profile node-level, job-level and process-level activities for MPI communication – Point-to-Point, Collectives and RMA • Ability to filter data based on type of counters using “drop down” list • Remotely monitor various metrics of MPI processes at user specified granularity • "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X • Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
  • 29. HPCAC-Stanford (Feb ’18) 29Network Based Computing Laboratory OSU INAM Features • Show network topology of large clusters • Visualize traffic pattern on different links • Quickly identify congested links/links in error state • See the history unfold – play back historical state of the network Comet@SDSC --- Clustered View (1,879 nodes, 212 switches, 4,377 network links) Finding Routes Between Nodes
  • 30. HPCAC-Stanford (Feb ’18) 30Network Based Computing Laboratory OSU INAM Features (Cont.) Visualizing a Job (5 Nodes) • Job level view • Show different network metrics (load, error, etc.) for any live job • Play back historical data for completed jobs to identify bottlenecks • Node level view - details per process or per node • CPU utilization for each rank/node • Bytes sent/received for MPI operations (pt-to-pt, collective, RMA) • Network metrics (e.g. XmitDiscard, RcvError) per rank/node Estimated Process Level Link Utilization • Estimated Link Utilization view • Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1 • Job level and • Process level
  • 31. HPCAC-Stanford (Feb ’18) 31Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • Optimized MVAPICH2 for OpenPower and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 32. HPCAC-Stanford (Feb ’18) 32 Network Based Computing Laboratory MVAPICH2-X for Hybrid MPI + PGAS Applications • Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI – Possible deadlock if both runtimes are not progressed – Consumes more network resources • Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF – Available since 2012 (starting with MVAPICH2-X 1.9) – http://mvapich.cse.ohio-state.edu
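As a rough illustration of the hybrid model that a unified runtime such as MVAPICH2-X supports, the sketch below mixes OpenSHMEM one-sided updates with an MPI collective in one program. It is a generic pattern, not code from the Graph500 or Sort designs cited on the next slide; the OpenSHMEM 1.4 naming (shmem_init, shmem_malloc, shmem_long_atomic_add) is assumed, and the exact initialization/finalization ordering can be implementation-specific.

```c
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

/* Hybrid MPI + OpenSHMEM: one-sided puts/atomics for irregular, fine-grained
 * updates, plus an MPI collective for the global reduction.  A unified
 * runtime lets both models share the same network resources and progress. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();

    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric-heap allocation, addressable by remote PEs. */
    long *counter = shmem_malloc(sizeof(long));
    *counter = 0;
    shmem_barrier_all();

    /* One-sided update on a neighbor PE (no matching receive needed). */
    shmem_long_atomic_add(counter, 1, (me + 1) % npes);
    shmem_barrier_all();

    /* Switch to MPI for a global reduction over the same data. */
    long local = *counter, total = 0;
    MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0)
        printf("total updates = %ld (expected %d)\n", total, npes);

    shmem_free(counter);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```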
  • 33. HPCAC-Stanford (Feb ’18) 33 Network Based Computing Laboratory Application Level Performance with Graph500 and Sort (charts: Graph500 execution time in seconds vs. number of processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); Sort execution time in seconds vs. input size and number of processes for MPI and Hybrid) • Performance of Hybrid (MPI+OpenSHMEM) Graph500 Design • 8,192 processes - 2.4X improvement over MPI-CSR - 7.6X improvement over MPI-Simple • 16,384 processes - 1.5X improvement over MPI-CSR - 13X improvement over MPI-Simple • Performance of Hybrid (MPI+OpenSHMEM) Sort Application • 4,096 processes, 4 TB Input Size - MPI – 2408 sec; 0.16 TB/min - Hybrid – 1172 sec; 0.36 TB/min - 51% improvement over MPI-design References: J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013; J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012; J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
  • 34. HPCAC-Stanford (Feb ’18) 34 Network Based Computing Laboratory Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation (charts: execution time in seconds vs. number of processes for the Heat Image kernel and the Heat-2D kernel using the Jacobi method, comparing KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell) • On heat diffusion based kernels AVX-512 vectorization showed better performance • MCDRAM showed significant benefits on Heat-Image kernel for all process counts. Combined with AVX-512 vectorization, it showed up to 4X improved performance
  • 35. HPCAC-Stanford (Feb ’18) 35Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs – CUDA-aware MPI – Optimized GPUDirect RDMA (GDR) Support – Support for Streaming Applications • Optimized MVAPICH2 for OpenPower and ARM Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 36. HPCAC-Stanford (Feb ’18) 36 Network Based Computing Laboratory GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU At Sender: MPI_Send(s_devbuf, size, …); At Receiver: MPI_Recv(r_devbuf, size, …); (handled inside MVAPICH2) • Standard MPI interfaces used for unified data movement • Takes advantage of Unified Virtual Addressing (>= CUDA 4.0) • Overlaps data movement from GPU with RDMA transfers High Performance and High Productivity
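Expanding the snippet on this slide into a self-contained example: with a CUDA-aware MPI library, the device pointers returned by cudaMalloc can be passed directly to MPI_Send/MPI_Recv. This is a minimal sketch (buffer size is arbitrary and error checking is omitted); MV2_USE_CUDA=1, listed later in the HOOMD-blue settings, enables this path at runtime in MVAPICH2-GDR.

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* With a CUDA-aware MPI library, device pointers go straight into MPI calls;
 * the library stages through host memory or uses GPUDirect RDMA internally. */
int main(int argc, char **argv)
{
    const int count = 1 << 20;   /* 1M floats (arbitrary for the sketch) */
    int rank;
    float *devbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&devbuf, count * sizeof(float));
    cudaMemset(devbuf, 0, count * sizeof(float));

    if (rank == 0)        /* send directly from GPU memory */
        MPI_Send(devbuf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1) { /* receive directly into GPU memory */
        MPI_Recv(devbuf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d floats into device memory\n", count);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```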
  • 37. HPCAC-Stanford (Feb ’18) 37 Network Based Computing Laboratory
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory
  • 38. HPCAC-Stanford (Feb ’18) 38 Network Based Computing Laboratory
Optimized MVAPICH2-GDR Design
[Figure: GPU-GPU Inter-node Latency, Bandwidth, and Bi-Bandwidth vs. Message Size (Bytes), MV2-(NO-GDR) vs. MV2-GDR-2.3a; 1.88 us small-message latency and up to 11X latency improvement, with roughly 9x-10x bandwidth and bi-directional bandwidth improvements]
Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA; MVAPICH2-GDR 2.3a
  • 39. HPCAC-Stanford (Feb ’18) 39 Network Based Computing Laboratory
Application-Level Evaluation (HOOMD-blue)
[Figure: 64K Particles and 256K Particles, Average Time Steps per second (TPS) vs. Number of Processes (4-32), MV2 vs. MV2+GDR; about 2X improvement in both cases]
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled (an example launch line follows below): MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
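For reference, these runtime parameters are typically passed on the launch line; with MVAPICH2's mpirun_rsh launcher a run might look like the line below. The process count, hostfile, and executable name are placeholders, not values from the original experiment.

    mpirun_rsh -np 32 -hostfile hosts MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384 ./hoomd-benchmark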
  • 40. HPCAC-Stanford (Feb ’18) 40 Network Based Computing Laboratory
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
[Figure: Normalized Execution Time vs. Number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), for Default, Callback-based, and Event-based designs]
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application
• Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS ’16.
  • 41. HPCAC-Stanford (Feb ’18) 41 Network Based Computing Laboratory
High-Performance Heterogeneous Broadcast for Streaming Applications
• Streaming applications on GPU clusters
  – Use a pipeline of broadcast operations to move host-resident data from a single source (typically live) to multiple GPU-based computing sites
  – Existing schemes require explicit data movement between Host and GPU memories, giving poor performance and breaking the pipeline
• IB hardware multicast + Scatter-List
  – Efficient heterogeneous-buffer broadcast operation
• CUDA Inter-Process Communication (IPC)
  – Efficient intra-node topology-aware broadcast operations for multi-GPU systems
• Available since MVAPICH2-GDR 2.3a
[Diagram: IB hardware multicast with scatter-list across nodes, followed by IPC-based cudaMemcpy (Device<->Device) among GPUs within each node]
Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters. C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, SBAC-PAD ’16, Oct 2016.
  • 42. HPCAC-Stanford (Feb ’18) 42 Network Based Computing Laboratory
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for millions to billions of processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
  • 43. HPCAC-Stanford (Feb ’18) 43 Network Based Computing Laboratory
Inter-node Point-to-Point Performance on OpenPOWER
[Figure: MVAPICH2 Small Message Latency and Large Message Latency (us), Bandwidth (MB/s), and Bi-directional Bandwidth (MB/s) vs. Message Size]
Platform: two OpenPOWER (Power8-ppc64le) nodes using a Mellanox EDR (MT4115) HCA.
  • 44. HPCAC-Stanford (Feb ’18) 44 Network Based Computing Laboratory
Intra-node Point-to-Point Performance on ARMv8
[Figure: MVAPICH2 Small Message Latency and Large Message Latency (us), Bandwidth (MB/s), and Bi-directional Bandwidth (MB/s) vs. Message Size; 0.74 microsec latency at 4 bytes]
Platform: dual-socket ARMv8 (aarch64) CPU with 96 cores; each socket contains 48 cores.
Available since MVAPICH2 2.3a
  • 45. HPCAC-Stanford (Feb ’18) 45 Network Based Computing Laboratory
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
[Figure: intra-node and inter-node latency (small and large messages) and bandwidth vs. Message Size (Bytes), comparing intra-socket (NVLink) and inter-socket paths]
• Intra-node Latency: 14.6 us (without GPUDirect RDMA)
• Intra-node Bandwidth: 33.9 GB/sec (NVLink)
• Inter-node Latency: 23.8 us (without GPUDirect RDMA)
• Inter-node Bandwidth: 11.9 GB/sec (EDR)
Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and EDR InfiniBand interconnect
Available in MVAPICH2-GDR 2.3a
  • 46. HPCAC-Stanford (Feb ’18) 46 Network Based Computing Laboratory
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for millions to billions of processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER and ARM
• Application Scalability and Best Practices
  • 47. HPCAC-Stanford (Feb ’18) 47 Network Based Computing Laboratory
Application Scalability on Skylake and KNL
[Figure: MiniFE (1300x1300x1300, ~910 GB), Execution Time (s) vs. No. of Processes on KNL (64 ppn, 2048-8192) and Skylake (48 ppn, 2048-8192)]
[Figure: NEURON (YuEtAl2012), Execution Time (s) vs. No. of Processes on Skylake (48 ppn, 48-768) and KNL (64 ppn, 64-4096)]
[Figure: Cloverleaf (bm64), MPI+OpenMP with NUM_OMP_THREADS = 2, Execution Time (s) vs. No. of Processes on KNL (68 ppn, 68-4352) and Skylake (48 ppn, 48-3072)]
Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
Testbed: TACC Stampede2 using MVAPICH2-2.3b
Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC
  • 48. HPCAC-Stanford (Feb ’18) 48 Network Based Computing Laboratory
Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)
[Figure: Execution Time (s) for milc, leslie3d, pop2, lammps, wrf2, tera_tf, and lu, Intel MPI 18.0.0 vs. MVAPICH2 2.3 rc1, with per-benchmark improvements of roughly 1-10%]
• MVAPICH2 outperforms Intel MPI by up to 10%
• 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn)
  • 49. HPCAC-Stanford (Feb ’18) 49 Network Based Computing Laboratory
Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)
[Figure: Execution Time (s) for milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf, and lu, Intel MPI 18.0.0 vs. MVAPICH2 2.3 rc1, with per-benchmark differences ranging from about -4% to 38%]
• MVAPICH2 outperforms Intel MPI by up to 38%
• 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn)
  • 50. HPCAC-Stanford (Feb ’18) 50 Network Based Computing Laboratory
Compilation of Best Practices
• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• Compiled a list of such contributions through the MVAPICH website
  – http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
  – Amber
  – HOOMD-blue
  – HPCG
  – Lulesh
  – MILC
  – Neuron
  – SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu and we will link these results with credits to you.
  • 51. HPCAC-Stanford (Feb ’18) 51 Network Based Computing Laboratory
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
  – Virtualization with SR-IOV and Containers
  • 52. HPCAC-Stanford (Feb ’18) 52 Network Based Computing Laboratory
Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
  – Unusually large message sizes (order of megabytes)
  – Most communication based on GPU buffers
• Existing state-of-the-art
  – cuDNN, cuBLAS, NCCL --> scale-up performance
  – CUDA-Aware MPI --> scale-out performance (for small and medium message sizes only!)
• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
  – Efficient overlap of computation and communication
  – Efficient large-message communication (reductions)
  – What application co-designs are needed to exploit communication-runtime co-designs?
[Diagram: scale-up performance (cuDNN, cuBLAS, NCCL) vs. scale-out performance (MPI, gRPC, Hadoop), with the proposed co-designs targeting both]
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP ’17).
  • 53. HPCAC-Stanford (Feb ’18) 53 Network Based Computing Laboratory
Efficient Broadcast: MVAPICH2-GDR and NCCL
• NCCL 1.x had some limitations
  – Only worked within a single node; no scale-out across multiple nodes
  – Degradation across the IOH (socket) for scale-up (within a node)
• We propose an optimized MPI_Bcast design that exploits NCCL [1]
  – Communication of very large GPU buffers
  – Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits (see the hedged sketch below):
  – CUDA-Aware MPI_Bcast in MV2-GDR for inter-node communication (k-nomial tree)
  – NCCL broadcast (ring) for intra-node transfers
• Can pure MPI-level designs achieve similar or better performance than the NCCL-based approach? [2]
[Figure: VGG training time with CNTK (seconds) vs. No. of GPUs, comparing MV2-GDR vs. MV2-GDR-NCCL and MV2-GDR-NCCL vs. MV2-GDR-Opt]
1. A. A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning. In Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016). [Best Paper Nominee]
2. A. A. Awan, C.-H. Chu, H. Subramoni, and D. K. Panda, Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, arXiv ’17 (https://arxiv.org/abs/1707.09414)
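To make the hierarchical idea concrete, the sketch below broadcasts a GPU buffer across node leaders with a CUDA-aware MPI_Bcast and then within each node over NCCL. It illustrates the scheme described on the slide, not the MVAPICH2-GDR internals; the communicator arguments, the helper name, and the assumption that the node leader is rank 0 of both the intra-node and NCCL communicators are all illustrative.

    /* Hedged sketch of a hierarchical broadcast: inter-node via CUDA-aware
     * MPI_Bcast among node leaders, intra-node via an NCCL ring. */
    #include <mpi.h>
    #include <nccl.h>
    #include <cuda_runtime.h>

    void hierarchical_bcast(float *d_buf, size_t count, int root,
                            MPI_Comm node_leaders,  /* one rank per node (MPI_COMM_NULL elsewhere) */
                            MPI_Comm intra_node,    /* ranks within one node                      */
                            ncclComm_t nccl_comm,   /* NCCL communicator within the node          */
                            cudaStream_t stream)
    {
        int node_rank;
        MPI_Comm_rank(intra_node, &node_rank);

        /* Step 1: inter-node broadcast among node leaders on GPU buffers
         * (the library may implement this as a k-nomial tree). */
        if (node_rank == 0 && node_leaders != MPI_COMM_NULL)
            MPI_Bcast(d_buf, (int)count, MPI_FLOAT, root, node_leaders);

        /* Step 2: intra-node broadcast from the leader over the NCCL ring
         * (assumes the leader is rank 0 in nccl_comm). */
        ncclBcast(d_buf, count, ncclFloat, /*root=*/0, nccl_comm, stream);
        cudaStreamSynchronize(stream);
    }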
  • 54. HPCAC-Stanford (Feb ’18) 54 Network Based Computing Laboratory
Large Message Optimized Collectives for Deep Learning
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large numbers of GPUs
• Available since MVAPICH2-GDR 2.2GA
[Figure: Latency (ms) for Reduce (192 GPUs, 2-128 MB; 64 MB on 128-192 GPUs), Allreduce (64 GPUs, 2-128 MB; 128 MB on 16-64 GPUs), and Bcast (64 GPUs, 2-128 MB; 128 MB on 16-64 GPUs)]
  • 55. HPCAC-Stanford (Feb ’18) 55 Network Based Computing Laboratory
MVAPICH2-GDR vs. Baidu-allreduce
• Initial evaluation shows promising performance gains for MVAPICH2-GDR 2.3a*
• 8 GPUs (4 nodes): MPI_Allreduce (MVAPICH2-GDR) vs. Baidu-Allreduce
[Figure: Latency (us) vs. Message Size (Bytes) for MV2-GDR-Tuned vs. Baidu-Allreduce; roughly 100X better for small messages, about 10X better for medium messages, and about 20% better for very large messages (up to 512 MB)]
*Available with MVAPICH2-GDR 2.3a
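The call being compared here is a single MPI_Allreduce on a GPU buffer, which in data-parallel training is the gradient-aggregation step at the end of each iteration. A minimal hedged sketch follows; the function and buffer names are illustrative, not taken from either library.

    /* Hedged sketch: gradient aggregation with MPI_Allreduce directly on a
     * GPU buffer, the pattern compared against Baidu-allreduce above. */
    #include <mpi.h>

    /* d_grad holds the local gradients in GPU memory; after the call every
     * rank holds the element-wise sum. The CUDA-aware runtime chooses
     * between host staging, GDRCOPY, or GPUDirect RDMA internally. */
    void allreduce_gradients(float *d_grad, int count, MPI_Comm comm)
    {
        MPI_Allreduce(MPI_IN_PLACE, d_grad, count, MPI_FLOAT, MPI_SUM, comm);
    }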
  • 56. HPCAC-Stanford (Feb ’18) 56 Network Based Computing Laboratory
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
  – Multi-GPU training within a single node
  – Performance degradation for GPUs across different sockets
  – Limited scale-out
• OSU-Caffe: MPI-based parallel training
  – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
  – Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
  – Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
[Figure: GoogLeNet (ImageNet) on up to 128 GPUs, Training Time (seconds) vs. No. of GPUs, for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); plain Caffe is an invalid use case at this scale]
OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
  • 57. HPCAC-Stanford (Feb ’18) 57 Network Based Computing Laboratory
HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting Accelerators
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
  – Virtualization with SR-IOV and Containers
  • 58. HPCAC-Stanford (Feb ’18) 58 Network Based Computing Laboratory
Can HPC and Virtualization be Combined?
• Virtualization has many benefits
  – Fault tolerance
  – Job migration
  – Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports OpenStack, Docker, and Singularity
J. Zhang, X. Lu, J. Jose, R. Shi, and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters?, EuroPar ’14.
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi, and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC ’14.
J. Zhang, X. Lu, M. Arnold, and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds, CCGrid ’15.
  • 59. HPCAC-Stanford (Feb ’18) 59 Network Based Computing Laboratory
Application-Level Performance on Chameleon
[Figure: SPEC MPI2007 Execution Time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 Execution Time (ms) for several (Scale, Edgefactor) problem sizes, comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]
• 32 VMs, 6 cores/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 processes
• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 processes
  • 60. HPCAC-Stanford (Feb ’18) 60 Network Based Computing Laboratory
Application-Level Performance on Docker with MVAPICH2
[Figure: NAS Class D (MG, FT, EP, LU, CG) Execution Time (s) and Graph500 BFS Execution Time (ms) at Scale 20, Edgefactor 16, comparing Container-Def, Container-Opt, and Native]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 73% reduction in execution time for NAS and Graph500, respectively
• Compared to Native, less than 9% and 5% overhead for NAS and Graph500, respectively
  • 61. HPCAC-Stanford (Feb ’18) 61 Network Based Computing Laboratory
Application-Level Performance on Singularity with MVAPICH2
[Figure: NPB Class D (CG, EP, FT, IS, LU, MG) Execution Time (s) and Graph500 BFS Execution Time (ms) for several (Scale, Edgefactor) problem sizes, Singularity vs. Native]
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
J. Zhang, X. Lu, and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC ’17, Best Student Paper Award.
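For context, such experiments commonly follow the hybrid container model, where the host MPI launcher starts one Singularity container instance per rank. A hedged example launch line is shown below; the image name, hostfile, rank count, and executable are placeholders, and it assumes the MVAPICH2 build inside the image is compatible with the host launcher.

    mpirun_rsh -np 512 -hostfile hosts singularity exec mvapich2-app.simg ./app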
  • 62. HPCAC-Stanford (Feb ’18) 62 Network Based Computing Laboratory
High Performance SR-IOV enabled VM Migration Framework
• Consists of an SR-IOV enabled IB cluster and an external migration controller
• Detachment/re-attachment of virtualized devices: multiple parallel libraries coordinate the VM during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, track migration status)
• IB connection: the MPI runtime handles IB connection suspension and reactivation
• Proposes Progress Engine (PE) based and Migration Thread (MT) based designs to optimize VM migration and MPI application performance
J. Zhang, X. Lu, and D. K. Panda, High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters, IPDPS 2017.
  • 63. HPCAC-Stanford (Feb ’18) 63 Network Based Computing Laboratory
Performance Evaluation of VM Migration Framework
[Figure: Pt2Pt Latency and Bcast (4 VMs * 2 Procs/VM), Latency (us) vs. Message Size (bytes), for PE-IPoIB, PE-RDMA, MT-IPoIB, and MT-RDMA]
• Migrate a VM from one machine to another while the benchmark is running inside
• Proposed MT-based designs perform slightly worse than PE-based designs because of lock/unlock overhead
• No benefit from MT here because no computation is involved
• Will be available in an upcoming MVAPICH2-Virt release
  • 64. HPCAC-Stanford (Feb ’18) 64 Network Based Computing Laboratory
Concluding Remarks
• Next-generation HPC systems need to be designed with a holistic view of Deep Learning and Cloud
• Presented some of the approaches and results along these directions
• These enable the Deep Learning and Cloud communities to take advantage of modern HPC technologies
• Many other open issues still need to be solved
  • 65. HPCAC-Stanford (Feb ’18) 65 Network Based Computing Laboratory
Funding Acknowledgments
[Logos: funding support and equipment support organizations]
  • 66. HPCAC-Stanford (Feb ’18) 66Network Based Computing Laboratory Personnel Acknowledgments Current Students (Graduate) – A. Awan (Ph.D.) – R. Biswas (M.S.) – M. Bayatpour (Ph.D.) – S. Chakraborthy (Ph.D.) – C.-H. Chu (Ph.D.) – S. Guganani (Ph.D.) Past Students – A. Augustine (M.S.) – P. Balaji (Ph.D.) – S. Bhagvat (M.S.) – A. Bhat (M.S.) – D. Buntinas (Ph.D.) – L. Chai (Ph.D.) – B. Chandrasekharan (M.S.) – N. Dandapanthula (M.S.) – V. Dhanraj (M.S.) – T. Gangadharappa (M.S.) – K. Gopalakrishnan (M.S.) – R. Rajachandrasekar (Ph.D.) – G. Santhanaraman (Ph.D.) – A. Singh (Ph.D.) – J. Sridhar (M.S.) – S. Sur (Ph.D.) – H. Subramoni (Ph.D.) – K. Vaidyanathan (Ph.D.) – A. Vishnu (Ph.D.) – J. Wu (Ph.D.) – W. Yu (Ph.D.) Past Research Scientist – K. Hamidouche – S. Sur Past Post-Docs – D. Banerjee – X. Besseron – H.-W. Jin – W. Huang (Ph.D.) – W. Jiang (M.S.) – J. Jose (Ph.D.) – S. Kini (M.S.) – M. Koop (Ph.D.) – K. Kulkarni (M.S.) – R. Kumar (M.S.) – S. Krishnamoorthy (M.S.) – K. Kandalla (Ph.D.) – M. Li (Ph.D.) – P. Lai (M.S.) – J. Liu (Ph.D.) – M. Luo (Ph.D.) – A. Mamidala (Ph.D.) – G. Marsh (M.S.) – V. Meshram (M.S.) – A. Moody (M.S.) – S. Naravula (Ph.D.) – R. Noronha (Ph.D.) – X. Ouyang (Ph.D.) – S. Pai (M.S.) – S. Potluri (Ph.D.) – J. Hashmi (Ph.D.) – H. Javed (Ph.D.) – P. Kousha (Ph.D.) – D. Shankar (Ph.D.) – H. Shi (Ph.D.) – J. Zhang (Ph.D.) – J. Lin – M. Luo – E. Mancini Current Research Scientists – X. Lu – H. Subramoni Past Programmers – D. Bureddy – J. Perkins Current Research Specialist – J. Smith – M. Arnold – S. Marcarelli – J. Vienne – H. Wang Current Post-doc – A. Ruhela Current Students (Undergraduate) – N. Sarkauskas (B.S.)
  • 67. HPCAC-Stanford (Feb ’18) 67 Network Based Computing Laboratory
Thank You!
panda@cse.ohio-state.edu
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/

Editor's notes

  1. This one can be updated with the latest version of GDR
  2. Available now.