In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems.
"This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP and Cuda) programming models by taking into account support for multi-core systems (KNL and OpenPower), high networks, GPGPUs (including GPUDirect RDMA) and energy awareness. Features and sample performance numbers from MVAPICH2 libraries will be presented. For the Deep Learning domain, we will focus on popular Deep Learning framewords (Caffe, CNTK, and TensorFlow) to extract performance and scalability with MVAPICH2-GDR MPI library and RDMA-enabled Big Data stacks. Finally, we will outline the challenges in moving these middleware to the Cloud environments."
Watch the video: https://youtu.be/i2I6XqOAh_I
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
1. Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Keynote Talk at HPCAC Stanford Conference (Feb ’18)
by
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
2. High-End Computing (HEC): Towards Exascale
• 100 PFlops reached in 2016; 1 EFlops expected in 2019-2020?
• Expected to have an ExaFlop system in 2019-2020!
3. Increasing Usage of HPC, Big Data and Deep Learning
• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)
Convergence of HPC, Big Data, and Deep Learning!
Increasing Need to Run these applications on the Cloud!!
4. Focus of Today’s Talk: HPC, Deep Learning, and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe and CNTK
• Cloud for HPC
– Virtualization with SR-IOV and Containers
Focus of Tomorrow’s Talk: HPC + Big Data, Deep Learning, and Cloud
5. Focus of Tomorrow’s Talk: HPC, Big Data, Deep Learning and Cloud
• Big Data/Enterprise/Commercial Computing
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached for Web 2.0
– Kafka for Stream Processing
– Swift
• Deep Learning
– TensorFlow with gRPC
• Cloud for Big Data Stacks
– Virtualization with SR-IOV and Containers
6. Parallel Programming Models Overview
• Shared Memory Model: SHMEM, DSM (processes P1..P3 over a single shared memory)
• Distributed Memory Model: MPI (Message Passing Interface) (processes P1..P3, each with its own memory)
• Partitioned Global Address Space (PGAS) Model: Global Arrays, UPC, Chapel, X10, CAF, … (per-process memories forming a logical shared memory)
• Programming models provide abstract machine models
• Models can be mapped on different types of systems
– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and Hybrid MPI+PGAS models are gradually receiving importance
7. Partitioned Global Address Space (PGAS) Models
• Key features
– Simple shared memory abstractions
– Lightweight one-sided communication
– Easier to express irregular communication
• Different approaches to PGAS
– Languages: Unified Parallel C (UPC), Co-Array Fortran (CAF), X10, Chapel
– Libraries: OpenSHMEM, UPC++, Global Arrays (a minimal one-sided put sketch follows below)
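To make the one-sided style concrete, here is a minimal OpenSHMEM sketch (not from the talk; the neighbor pattern is an illustrative assumption, and the same idea carries over to the other PGAS models):

#include <shmem.h>
#include <stdio.h>

/* One-sided PGAS communication: each PE writes directly into a
 * symmetric variable on its right neighbor; no matching receive is
 * posted on the target. */
int main(void) {
    shmem_init();
    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    static int dest = -1;                 /* symmetric: exists on every PE */
    int src = me;

    shmem_int_put(&dest, &src, 1, (me + 1) % npes);  /* remote write */
    shmem_barrier_all();                  /* make all puts visible */

    printf("PE %d received %d\n", me, dest);
    shmem_finalize();
    return 0;
}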
8. Hybrid (MPI+PGAS) Programming
• Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics
• Benefits:
– Best of Distributed Computing Model
– Best of Shared Memory Computing Model
(Figure: an HPC application with Kernels 1..N initially in MPI, where selected kernels, e.g. Kernel 2 and Kernel N, are re-written in PGAS)
9. Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
(Figure: co-design stack with middleware co-design opportunities and challenges across various layers – performance, scalability, resilience)
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, and Accelerators (GPU and FPGA)
10. Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
• Scalable collective communication
– Offload
– Non-blocking (see the sketch below)
– Topology-aware
• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated Support for GPGPUs and Accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-Awareness
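As a concrete illustration of the non-blocking collectives listed above, a minimal MPI-3 sketch (buffer size and the placement of the compute step are illustrative assumptions, not taken from the talk):

#include <mpi.h>

/* Overlap an allreduce with independent computation using an MPI-3
 * non-blocking collective (MPI_Iallreduce + MPI_Wait). */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double local[1024], global[1024];
    for (int i = 0; i < 1024; i++) local[i] = 1.0;

    MPI_Request req;
    MPI_Iallreduce(local, global, 1024, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent computation proceeds here while the reduction
     * progresses (possibly offloaded to the network) ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}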
11. Additional Challenges for Designing Exascale Software Libraries
• Extreme Low Memory Footprint
– Memory per core continues to decrease
• D-L-A Framework
– Discover
• Overall network topology (fat-tree, 3D, …), network topology of processes for a given job
• Node architecture, health of network and node
– Learn
• Impact on performance and scalability
• Potential for failure
– Adapt
• Internal protocols and algorithms
• Process mapping
• Fault-tolerance solutions
– Low overhead techniques while delivering performance, scalability and fault-tolerance
12. Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,875 organizations in 85 countries
– More than 445,000 (> 0.4 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘17 ranking)
• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 12th, 368,928-core (Stampede2) at TACC
• 17th, 241,108-core (Pleiades) at NASA
• 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->
– Sunway TaihuLight (1st in Jun’17, 10M cores, 100 PFlops)
14. Architecture of MVAPICH2 Software Family
• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) and Modern Multi-/Many-core Architectures (Intel Xeon, OpenPower, Xeon Phi, ARM, NVIDIA GPGPU)
– Transport protocols: RC, XRC, UD, DC
– Transport mechanisms: Shared Memory, CMA, IVSHMEM
– Modern features: UMR, ODP, SR-IOV, Multi-Rail, MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)
15. MVAPICH2 2.3rc1
• Released on 02/19/2018
• Major Features and Enhancements
– Enhanced performance for Allreduce, Reduce_scatter_block, Allgather, Allgatherv through new algorithms
– Enhanced support for MPI_T PVARs and CVARs
– Improved job startup time for OFA-IB-CH3, PSM-CH3, and PSM2-CH3
– Support to automatically detect the IP address of IB/RoCE interfaces when RDMA_CM is enabled, without relying on the mv2.conf file
– Enhanced HCA detection to handle cases where a node has both IB and RoCE HCAs
– Automatically detect and use the maximum MTU supported by the HCA
– Added logic to detect heterogeneous CPU/HFI configurations in the PSM-CH3 and PSM2-CH3 channels
– Enhanced intra-node and inter-node tuning for the PSM-CH3 and PSM2-CH3 channels
– Enhanced HFI selection logic for systems with multiple Omni-Path HFIs
– Enhanced tuning and architecture detection for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems
– Added 'SPREAD', 'BUNCH', and 'SCATTER' binding options for the hybrid CPU binding policy
– Renamed MV2_THREADS_BINDING_POLICY to MV2_HYBRID_BINDING_POLICY
– Added support for MV2_SHOW_CPU_BINDING to display the number of OMP threads
– Updated to hwloc version 1.11.9
16. Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
– Support for highly-efficient communication
– Scalable start-up
– Optimized collectives using SHArP and multi-leaders
– Optimized CMA-based collectives
– Optimized XPMEM-based collectives
– Dynamic and adaptive communication and tag matching
– Performance engineering with MPI-T support
– Integrated network analysis and monitoring
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPower and ARM
• Application Scalability and Best Practices
19. Startup Performance on KNL + Omni-Path
(Chart: MPI_Init and Hello World time vs. number of processes, 64 to 64K, with MVAPICH2-2.3a on Oakforest-PACS)
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s, respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
• New designs available since MVAPICH2-2.3a and as a patch for SLURM 15, 16, and 17
20. Advanced Allreduce Collective Designs Using SHArP
(Charts: Mesh Refinement Time of MiniAMR and Avg DDOT Allreduce time of HPCG at (4,28), (8,28), (16,28) (Number of Nodes, PPN); MVAPICH2-SHArP shows 12-13% lower latency than MVAPICH2)
• SHArP support is available since MVAPICH2 2.3a
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.
21. Performance of MPI_Allreduce On Stampede2 (10,240 Processes)
(Charts: OSU Micro-Benchmark MPI_Allreduce latency, 64 PPN, for 4B-4KB and 8KB-256KB messages; MVAPICH2, MVAPICH2-OPT, and IMPI)
• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X (a measurement sketch follows below)
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.
• Available in MVAPICH2-X 2.3b
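For readers who want to reproduce this kind of measurement, a minimal OSU-style allreduce latency loop might look like the following (iteration counts and the 32 KB message size are illustrative assumptions):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Time MPI_Allreduce for a fixed message size, averaged over many
 * iterations, in the style of osu_allreduce. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 32 * 1024 / sizeof(float);  /* 32 KB message */
    const int iters = 1000, skip = 100;           /* warm-up + timed */
    float *sendbuf = malloc(count * sizeof(float));
    float *recvbuf = malloc(count * sizeof(float));
    for (int i = 0; i < count; i++) sendbuf[i] = 1.0f;

    double start = 0.0;
    for (int i = 0; i < iters + skip; i++) {
        if (i == skip) {
            MPI_Barrier(MPI_COMM_WORLD);
            start = MPI_Wtime();
        }
        MPI_Allreduce(sendbuf, recvbuf, count, MPI_FLOAT, MPI_SUM,
                      MPI_COMM_WORLD);
    }
    double avg_us = (MPI_Wtime() - start) * 1e6 / iters;
    if (rank == 0) printf("32KB MPI_Allreduce: %.2f us\n", avg_us);

    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}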
22. Optimized CMA-based Collectives for Large Messages
(Charts: MPI_Gather latency on KNL nodes, 64 PPN, for 1KB-4MB messages on 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs; Tuned CMA vs. MVAPICH2-2.3a, Intel MPI 2017 and OpenMPI 2.1.0, with roughly 2.5x, 3.2x, 4x and up to 17x improvements)
• Significant improvement over existing implementations for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPower)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall); a sketch of the underlying CMA copy primitive follows below
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention-Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist
• Available in MVAPICH2-X 2.3b
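CMA (Cross Memory Attach) is the Linux kernel facility these kernel-assisted collectives build on; a minimal sketch of the primitive (the helper below is illustrative, and a real MPI library exchanges the remote PID and address through its own intra-node setup):

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>

/* Copy 'len' bytes directly from another local process's address space
 * into 'dst' with a single kernel-assisted copy, avoiding a staging
 * buffer in shared memory. 'remote_pid' and 'remote_addr' must be
 * obtained out-of-band. */
static ssize_t cma_read(pid_t remote_pid, void *remote_addr,
                        void *dst, size_t len) {
    struct iovec local  = { .iov_base = dst,         .iov_len = len };
    struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
    return process_vm_readv(remote_pid, &local, 1, &remote, 1, 0);
}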
23. Shared Address Space (XPMEM)-based Collectives Design
(Charts: OSU_Allreduce and OSU_Reduce latency on Broadwell, 256 processes, 16KB-4MB messages; MVAPICH2-Opt vs. MVAPICH2-2.3b and IMPI-2017v1.132)
• "Shared Address Space"-based true zero-copy Reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4MB Allreduce
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
• Will be available in future MVAPICH2 releases
24. Application-Level Benefits of XPMEM-Based Collectives
(Charts: CNTK AlexNet training time on Broadwell, batch size=default, 50 iterations, ppn=28, at 28-224 processes, and MiniAMR execution time on Broadwell, ppn=16, at 16-256 processes; IMPI-2017v1.132, MVAPICH2-2.3b, and MVAPICH2-Opt)
• Up to 20% benefit over IMPI for CNTK DNN training using Allreduce
• Up to 27% benefit over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel
25. Dynamic and Adaptive MPI Point-to-point Communication Protocols
• Eager threshold for an example communication pattern (processes 0-3 on Node 1 paired with processes 4-7 on Node 2) under different designs:
– Default: 16 KB for every pair
– Manually Tuned: 128 KB for every pair
– Dynamic + Adaptive: per-pair thresholds matching the desired values (0-4: 32 KB, 1-5: 64 KB, 2-6: 128 KB, 3-7: 32 KB)
• Trade-offs:
– Default: poor overlap, low memory requirement; low performance, high productivity
– Manually Tuned: good overlap, high memory requirement; high performance, low productivity
– Dynamic + Adaptive: good overlap, optimal memory requirement; high performance, high productivity (see the overlap sketch below)
(Charts: execution time and relative memory consumption of Amber at 128-1K processes for Default, Threshold=17K/64K/128K, and Dynamic Threshold)
H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper
• Will be available in future MVAPICH2 releases
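To see what "overlap" means here, a minimal sketch of the pattern whose behavior the eager threshold controls (the helper below is illustrative and not from the talk; with MVAPICH2 the threshold can also be moved manually via the documented MV2_IBA_EAGER_THRESHOLD parameter, whereas the proposed design adapts it per process pair):

#include <mpi.h>

/* Why the eager threshold matters: for messages below the threshold
 * the MPI_Isend can complete eagerly and the compute step genuinely
 * overlaps the transfer; above it, a rendezvous handshake may delay
 * progress until MPI_Waitall. */
void exchange_with_overlap(double *sendbuf, double *recvbuf,
                           int count, int peer) {
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... independent computation here; how much of the transfer
     * overlaps depends on the protocol chosen for this message size ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}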
26. Dynamic and Adaptive Tag Matching
• Challenge: tag matching is a significant overhead for receivers; existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead
• Solution: a new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one (see the sketch below)
• Results: better performance than other state-of-the-art tag-matching schemes with minimum memory consumption
(Charts: normalized total tag matching time and normalized memory overhead per process at 512 processes, relative to the default design; lower is better)
Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]
• Will be available in future MVAPICH2 releases
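To see where the traversal cost comes from, consider a receiver that pre-posts many receives with distinct tags; every arriving message has to be matched against this list (a minimal sketch; the tag count and buffer shape are illustrative assumptions):

#include <mpi.h>

#define NUM_TAGS 1024

/* A receiver that pre-posts one receive per tag. Each incoming message
 * must be matched against the posted-receive list; adaptive tag-matching
 * designs try to keep that traversal short and cheap. */
void post_tagged_receives(int peer, double bufs[NUM_TAGS][256],
                          MPI_Request reqs[NUM_TAGS]) {
    for (int tag = 0; tag < NUM_TAGS; tag++) {
        MPI_Irecv(bufs[tag], 256, MPI_DOUBLE, peer, tag,
                  MPI_COMM_WORLD, &reqs[tag]);
    }
    /* Messages may arrive in any tag order; matching cost grows with
     * the number of request objects traversed before a hit. */
    MPI_Waitall(NUM_TAGS, reqs, MPI_STATUSES_IGNORE);
}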
27. HPCAC-Stanford (Feb ’18) 27Network Based Computing Laboratory
● Enhance existing support for MPI_T in MVAPICH2 to expose a richer
set of performance and control variables
● Get and display MPI Performance Variables (PVARs) made available by
the runtime in TAU
● Control the runtime’s behavior via MPI Control Variables (CVARs)
● Introduced support for new MPI_T based CVARs to MVAPICH2
○ MPIR_CVAR_MAX_INLINE_MSG_SZ,
MPIR_CVAR_VBUF_POOL_SIZE,
MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-
interactive mode for uninstrumented applications
Performance Engineering Applications using MVAPICH2 and TAU
VBUF usage without CVAR based tuning as displayed by ParaProf VBUF usage with CVAR based tuning as displayed by ParaProf
S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU,
Euro MPI, 2017 [Best Paper Award]
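Such CVARs can be discovered programmatically through the standard MPI_T tool interface; a minimal sketch using only generic MPI-3 calls (the MVAPICH2-specific variable names above will only appear in the output if the library exposes them):

#include <mpi.h>
#include <stdio.h>

/* Enumerate the control variables (CVARs) exposed by the MPI library
 * through the MPI_T tool interface and print their names. */
int main(int argc, char **argv) {
    int provided, num_cvars;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&num_cvars);
    for (int i = 0; i < num_cvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &binding, &scope);
        printf("CVAR %d: %s\n", i, name);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}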
28. Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• OSU INAM v0.9.2 released on 10/31/2017
• Significant enhancements to the user interface to enable scaling to clusters with thousands of nodes
• Improved database insert times by using 'bulk inserts'
• Capability to look up the list of nodes communicating through a network link
• Capability to classify data flowing over a network link at job-level and process-level granularity in conjunction with MVAPICH2-X 2.3b
• "Best practices" guidelines for deploying OSU INAM on different clusters
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on type of counters using a "drop down" list
• Remotely monitor various metrics of MPI processes at user-specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize data transfer in a "live" or "historical" fashion for the entire network, a job or a set of nodes
29. OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
(Screenshots: Comet@SDSC clustered view – 1,879 nodes, 212 switches, 4,377 network links; finding routes between nodes)
30. OSU INAM Features (Cont.)
• Job level view (screenshot: visualizing a 5-node job)
– Show different network metrics (load, error, etc.) for any live job
– Play back historical data for completed jobs to identify bottlenecks
• Node level view – details per process or per node
– CPU utilization for each rank/node
– Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
– Network metrics (e.g. XmitDiscard, RcvError) per rank/node
• Estimated Link Utilization view (screenshot: estimated process-level link utilization)
– Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1
31. Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPower and ARM
• Application Scalability and Best Practices
32. MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current Model – separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF (a minimal hybrid sketch follows below)
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
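A minimal sketch of what a hybrid MPI + OpenSHMEM program over such a unified runtime can look like (assuming an OpenSHMEM 1.2-style API; initialization and finalization ordering conventions can differ between implementations):

#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

/* Hybrid MPI + OpenSHMEM: OpenSHMEM provides one-sided puts while MPI
 * handles the collective, both carried by one communication runtime. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    shmem_init();

    int me = shmem_my_pe(), npes = shmem_n_pes();
    int *target = shmem_malloc(sizeof(int));   /* symmetric heap */
    *target = -1;

    /* One-sided: each PE writes its rank into its right neighbor. */
    int src = me;
    shmem_int_put(target, &src, 1, (me + 1) % npes);
    shmem_barrier_all();

    /* Two-sided/collective: MPI sums the received values. */
    int sum = 0;
    MPI_Allreduce(target, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0) printf("sum of neighbor ranks = %d\n", sum);

    shmem_free(target);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}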
33. Application Level Performance with Graph500 and Sort
• Performance of the Hybrid (MPI+OpenSHMEM) Graph500 design (chart: execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid MPI+OpenSHMEM)
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
• Performance of the Hybrid (MPI+OpenSHMEM) Sort application (chart: execution time for 500GB-512 to 4TB-4K input data / process counts)
– 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC'13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
34. Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation
(Charts: Heat-Image kernel and Heat-2D Jacobi kernel time at 16-128 processes for KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell)
• On heat-diffusion-based kernels, AVX-512 vectorization showed better performance
• MCDRAM showed significant benefits on the Heat-Image kernel for all process counts; combined with AVX-512 vectorization, it showed up to 4X improved performance
35. Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
– CUDA-aware MPI
– Optimized GPUDirect RDMA (GDR) Support
– Support for Streaming Applications
• Optimized MVAPICH2 for OpenPower and ARM
36. GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• At Sender: MPI_Send(s_devbuf, size, …);
• At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement handled inside MVAPICH2)
• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
• High Performance and High Productivity (see the sketch below)
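A minimal end-to-end sketch of this usage with a CUDA-aware MPI build such as MVAPICH2-GDR (buffer size and the two-rank pattern are illustrative assumptions):

#include <mpi.h>
#include <cuda_runtime.h>

/* With a CUDA-aware MPI library, device pointers can be passed directly
 * to MPI calls; no explicit cudaMemcpy staging is needed. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                       /* 1M floats (~4 MB) */
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(devbuf, 0, n * sizeof(float));
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* s_devbuf */
    } else if (rank == 1) {
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,    /* r_devbuf */
                 MPI_STATUS_IGNORE);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}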
37. CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication
(GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU
adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from
GPU device buffers
• Unified memory
40. Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
(Charts: normalized execution time on the Wilkes GPU cluster at 4-32 GPUs and on the CSCS GPU cluster at 16-96 GPUs for Default, Callback-based, and Event-based designs)
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS'16
41. High-Performance Heterogeneous Broadcast for Streaming Applications
• Streaming applications on GPU clusters
– Use a pipeline of broadcast operations to move host-resident data from a single source – typically live – to multiple GPU-based computing sites
– Existing schemes require explicit data movements between Host and GPU memories, giving poor performance and breaking the pipeline
• IB hardware multicast + Scatter-List
– Efficient heterogeneous-buffer broadcast operation
• CUDA Inter-Process Communication (IPC)
– Efficient intra-node topology-aware broadcast operations for multi-GPU systems
• Available since MVAPICH2-GDR 2.3a!
(Diagrams: multicast steps from a source node through the IB switch to the CPUs/GPUs of Nodes 1..N, followed by an IB Scatter-List step, and IPC-based cudaMemcpy (Device<->Device) among GPUs 0..N within a node)
Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters. C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, SBAC-PAD'16, Oct 2016.
42. Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPower and ARM
• Application Scalability and Best Practices
46. Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Optimized MVAPICH2 for OpenPower and ARM
• Application Scalability and Best Practices
48. Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)
(Chart: execution time of milc, leslie3d, pop2, lammps, wrf2, tera_tf and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3rc1; per-benchmark gains of 1-10%)
• MVAPICH2 outperforms Intel MPI by up to 10%
• 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn)
49. Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)
(Chart: execution time of milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3rc1; per-benchmark differences ranging from -4% to 38%)
• MVAPICH2 outperforms Intel MPI by up to 38%
• 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn)
50. Compilation of Best Practices
• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance (see the example below)
• Compiled a list of such contributions through the MVAPICH website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HoomDBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu and we will link these results with credits to you.
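As an illustration of what such tuning looks like in practice, MVAPICH2 runtime parameters can be passed as environment variables on the launcher command line; a hypothetical example (host file, process count and threshold value are placeholders; MV2_IBA_EAGER_THRESHOLD is one of the documented parameters):

mpirun_rsh -np 448 -hostfile hosts MV2_IBA_EAGER_THRESHOLD=131072 ./app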
51. HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
– Virtualization with SR-IOV and Containers
52. Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing State-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– CUDA-Aware MPI --> scale-out performance
• For small and medium message sizes only!
• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient Overlap of Computation and Communication
– Efficient Large-Message Communication (Reductions) – see the sketch below
– What application co-designs are needed to exploit communication-runtime co-designs?
(Diagram: scale-up vs. scale-out performance, with cuDNN/cuBLAS/NCCL strong on scale-up, gRPC/Hadoop strong on scale-out, and the proposed MPI co-designs targeting both)
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
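The large-message reductions referred to above are typically gradient aggregations in data-parallel training; a minimal sketch of that pattern with a CUDA-aware MPI library (the in-place reduction and the helper name are illustrative assumptions):

#include <mpi.h>

/* Data-parallel DL training step: every rank holds gradients for the
 * same model in GPU memory (d_grad points to a cudaMalloc'd buffer);
 * an allreduce on the device buffer sums them across ranks. Message
 * sizes are typically many megabytes. */
void aggregate_gradients(float *d_grad, int count) {
    /* With a CUDA-aware MPI, the device pointer is passed directly. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, count, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);
    /* Averaging (scaling by 1/world_size) would normally be a small
     * GPU kernel; omitted here for brevity. */
}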
53. Efficient Broadcast: MVAPICH2-GDR and NCCL
• NCCL 1.x had some limitations
– Only worked within a single node; no scale-out to multiple nodes
– Degradation across the IOH (socket) for scale-up (within a node)
• We propose an optimized MPI_Bcast design that exploits NCCL [1]
– Communication of very large GPU buffers
– Scale-out on a large number of dense multi-GPU nodes
• Hierarchical communication that efficiently exploits:
– CUDA-Aware MPI_Bcast in MV2-GDR for inter-node communication (k-nomial)
– NCCL Broadcast (ring) for intra-node transfers
• Can pure MPI-level designs achieve similar or better performance than the NCCL-based approach? [2]
(Charts: VGG training time with CNTK vs. number of GPUs, comparing MV2-GDR with MV2-GDR-NCCL and MV2-GDR-NCCL with MV2-GDR-Opt)
1. A. A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning. In Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016). [Best Paper Nominee]
2. A. A. Awan, C-H. Chu, H. Subramoni, and D. K. Panda. Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, arXiv '17 (https://arxiv.org/abs/1707.09414)
54. Large Message Optimized Collectives for Deep Learning
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with a large number of GPUs
• Available since MVAPICH2-GDR 2.2GA
(Charts: Reduce, Allreduce and Bcast latency for 2-128 MB messages at 64/192 GPUs, and latency vs. GPU count for 64 MB Reduce, 128 MB Allreduce and 128 MB Bcast)
56. OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and Weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based Parallel Training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
– Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
(Chart: GoogLeNet (ImageNet) training time on up to 128 GPUs for Caffe, OSU-Caffe (1024) and OSU-Caffe (2048); plain Caffe beyond a single node is an invalid use case)
• OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
57. HPC, Big Data, Deep Learning, and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
– Virtualization with SR-IOV and Containers
58. Can HPC and Virtualization be Combined?
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• Has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root – I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
– OpenStack, Docker, and Singularity
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC'14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid'15
59. Application-Level Performance on Chameleon (SR-IOV)
(Charts: Graph500 execution time for problem sizes (22,20)-(26,16) and SPEC MPI2007 execution time for milc, leslie3d, pop2, GAPgeofem, zeusmp2 and lu, comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt and MV2-Native)
• 32 VMs, 6 cores/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 procs
• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 procs
60. Application-Level Performance on Docker with MVAPICH2
(Charts: NAS Class D (MG, FT, EP, LU, CG) execution time and Graph 500 BFS execution time (Scale 20, Edgefactor 16) with 1Cont*16P, 2Conts*8P and 4Conts*4P, comparing Container-Def, Container-Opt and Native)
• 64 Containers across 16 nodes, pinning 4 cores per Container
• Compared to Container-Def, up to 11% and 73% reduction in execution time for NAS and Graph 500
• Compared to Native, less than 9% and 5% overhead for NAS and Graph 500
61. Application-Level Performance on Singularity with MVAPICH2
(Charts: Graph500 BFS execution time for problem sizes (22,16)-(26,20) and NPB Class D (CG, EP, FT, IS, LU, MG) execution time, Singularity vs. Native)
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
J. Zhang, X. Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC '17, Best Student Paper Award
62. High Performance SR-IOV enabled VM Migration Framework
• Consists of an SR-IOV enabled IB cluster and an external migration controller
• Detachment/re-attachment of virtualized devices: multiple parallel libraries coordinate VMs during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, track migration status)
• IB connection: the MPI runtime handles IB connection suspension and reactivation
• Proposes Progress Engine (PE) and Migration Thread based (MT) designs to optimize VM migration and MPI application performance
J. Zhang, X. Lu, D. K. Panda, High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters, IPDPS 2017
63. Performance Evaluation of VM Migration Framework
(Charts: Pt2Pt latency and Bcast (4 VMs * 2 procs/VM) latency for 1B-1MB messages with PE-IPoIB, PE-RDMA, MT-IPoIB and MT-RDMA)
• Migrate a VM from one machine to another while the benchmark is running inside
• Proposed MT-based designs perform slightly worse than PE-based designs because of lock/unlock
• No benefit from MT because no computation is involved
• Will be available in upcoming MVAPICH2-Virt
64. Concluding Remarks
• Next generation HPC systems need to be designed with a holistic view of Deep Learning and Cloud
• Presented some of the approaches and results along these directions
• Enable the Deep Learning and Cloud communities to take advantage of modern HPC technologies
• Many other open issues need to be solved
65. Funding Acknowledgments
• Funding Support by (agency logos in the original slide)
• Equipment Support by (vendor logos in the original slide)
66. Personnel Acknowledgments
Current Students (Graduate)
– A. Awan (Ph.D.)
– R. Biswas (M.S.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.-H. Chu (Ph.D.)
– S. Guganani (Ph.D.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist
– K. Hamidouche
– S. Sur
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– M. Li (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– J. Hashmi (Ph.D.)
– H. Javed (Ph.D.)
– P. Kousha (Ph.D.)
– D. Shankar (Ph.D.)
– H. Shi (Ph.D.)
– J. Zhang (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists
– X. Lu
– H. Subramoni
Past Programmers
– D. Bureddy
– J. Perkins
Current Research Specialist
– J. Smith
– M. Arnold
– S. Marcarelli
– J. Vienne
– H. Wang
Current Post-doc
– A. Ruhela
Current Students (Undergraduate)
– N. Sarkauskas (B.S.)
67. Thank You!
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
panda@cse.ohio-state.edu
The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/