GPU: Understanding CUDA
  1. Rachel Miller, Research Computing Lab
  2. - CUDA is a parallel computing platform and programming model that uses the Graphics Processing Unit (GPU)
     - Allows calculations to be performed in parallel, giving significant speedup
     - Used with C programs
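The "CUDA with C" workflow this slide describes can be sketched as a minimal program (an illustrative example, not from the deck; compiled with nvcc, run on a CUDA-capable GPU):

```cuda
#include <stdio.h>

// A kernel: code that runs on the GPU. __global__ marks a function
// launched from the CPU and executed by many GPU threads at once.
__global__ void add_one(float *data)
{
    int i = threadIdx.x;   // each thread handles one array element
    data[i] += 1.0f;
}

int main(void)
{
    float host[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    float *dev;

    cudaMalloc((void **)&dev, sizeof(host));                      // allocate GPU memory
    cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);  // CPU -> GPU

    add_one<<<1, 8>>>(dev);   // launch 1 block of 8 threads in parallel

    cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);  // GPU -> CPU
    cudaFree(dev);

    printf("%g %g ... %g\n", host[0], host[1], host[7]);
    return 0;
}
```

Compile with `nvcc add_one.cu -o add_one`; the kernel body runs once per thread, which is where the parallel speedup comes from.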
  3. - GPUs are designed to perform high-speed parallel calculations for displaying graphics, such as games
     - Use available resources! Over 100 million GPUs are already deployed
     - 30-100x speedup over other microprocessors for some applications
  4. - GPUs have many small Arithmetic Logic Units (ALUs), compared to a few larger ones on the CPU
     - This allows many parallel computations, such as calculating a color for each pixel on the screen
     (Image from NVIDIA CUDA Programming Guide)
  5. - GPUs run one kernel (a group of work) at a time
     - Each kernel has blocks, which are independent groups of threads mapped onto the ALUs
     - Each block is composed of threads, which are the level at which the computation happens
     - The threads in each block typically work together to compute a value
     (Image from NVIDIA: a grid of blocks, each block a grid of threads)
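The grid/block/thread hierarchy in the figure maps directly onto CUDA's kernel launch syntax; a hedged sketch matching the figure's 3x2 grid of blocks, each with 5x3 threads (the kernel name and its work are placeholders):

```cuda
__global__ void work(float *out)
{
    /* each of the 90 launched threads executes this body once */
}

int main(void)
{
    dim3 grid(3, 2);    // Grid 1: Block (0,0) .. Block (2,1)
    dim3 block(5, 3);   // each block: Thread (0,0) .. Thread (4,2)

    float *out;
    cudaMalloc((void **)&out, 90 * sizeof(float));

    // One kernel runs at a time; its work is spread over the whole grid.
    work<<<grid, block>>>(out);
    cudaDeviceSynchronize();   // wait for the kernel to finish

    cudaFree(out);
    return 0;
}
```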
  6. - Threads within the same block can share memory
     - In CUDA, sending information from the CPU to the GPU is often the most expensive part of the calculation
     - For each thread, local memory is fastest, followed by shared memory; global, constant, and texture memory are the slowest
     (Image from NVIDIA: the CUDA memory hierarchy, from per-thread registers and local memory through per-block shared memory to device-wide global, constant, and texture memory)
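Because host-to-device transfer is the expensive step, a common pattern is to copy data to global memory once, then have each block stage its portion into fast shared memory; a sketch under assumed names and sizes:

```cuda
#define BLOCK 256

__global__ void stage(const float *in, float *out, int n)
{
    __shared__ float tile[BLOCK];   // fast memory, visible to the whole block

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        tile[threadIdx.x] = in[i];  // one slow global-memory read per thread
    __syncthreads();                // wait until the whole tile is loaded

    // ... threads now cooperate on tile[] at shared-memory speed ...

    if (i < n)
        out[i] = tile[threadIdx.x];
}
```

The same cost logic applies on the host side: one large `cudaMemcpy` is generally cheaper than many small ones.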
  7. - Each thread “knows” the x and y coordinates of the block it is in, and its own coordinates within that block
     - These positions can be used to compute a unique thread ID for each thread
     - The computational work done depends on the value of this thread ID
     - Example: the thread ID corresponds to a group of matrix elements
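The unique thread ID is computed from the built-in block and thread coordinates; for a matrix stored row by row, one element per thread (a standard indexing pattern, not spelled out in the deck):

```cuda
__global__ void scale(float *m, int width, int height)
{
    // Block coordinates plus position-within-block give a unique element.
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    if (col < width && row < height) {
        int tid = row * width + col;   // unique thread ID = matrix index
        m[tid] *= 2.0f;                // the work done depends on the ID
    }
}
```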
  8. - All threads in a block run in parallel only if they are all following the same code; it's important to eliminate logical branches, to keep all threads running in lockstep
     - Threads get fast access only to local and shared memory, so any needed data should be staged into shared memory
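One common way to eliminate a logical branch, per the slide's advice, is to replace it with arithmetic so every thread executes the identical instruction stream (an illustrative rewrite, not from the deck):

```cuda
__global__ void clamp_and_double(float *x)
{
    int i = threadIdx.x;

    // Divergent version: threads on different sides of the branch
    // are serialized within a warp.
    //     if (x[i] < 0.0f) x[i] = 0.0f;
    //     else             x[i] = x[i] * 2.0f;

    // Branch-free version: every thread runs the same instructions.
    float pos = (float)(x[i] >= 0.0f);   // 1.0 if nonnegative, else 0.0
    x[i] = pos * (x[i] * 2.0f);          // negatives become 0, rest doubled
}
```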
  9. - CUDA applications should run parallel operations on lots of data, and be processing intensive
     - Examples:
       - Molecular Dynamics Simulation
       - Video/Audio Encoding and Manipulation
       - 3D Imaging and Visualization
       - Matrix Operations
  10. - These collisions of thousands of tiny balls run in real time on a desktop computer! (And look better there, too.)
  11. - Watch a better version at http://www.youtube.com/watch?v=RqduA7myZok
  12. - Over 170 premade CUDA tools exist and can serve as building blocks for applications
        - Areas include Imaging, Video & Audio Processing, Molecular Dynamics, and Signal Processing
      - CUDA can also help an existing application meet its need for speed
        - Process huge datasets faster
        - Can achieve close to real-time data processing
  13. - NVIDIA (the makers of CUDA) created a MATLAB plug-in for accelerating standard MATLAB 2D FFTs
      - CUDA has a graphics toolbox for MATLAB
      - More MATLAB plug-ins to come!