Introduction to CUDA C
What is CUDA?
 CUDA Architecture
— Expose general-purpose GPU computing as first-class capability
— Retain traditional DirectX/OpenGL graphics performance
 CUDA C
— Based on industry-standard C
— A handful of language extensions to allow heterogeneous programs
— Straightforward APIs to manage devices, memory, etc.
 This talk will introduce you to CUDA C
Introduction to CUDA C
 What will you learn today?
— Start from “Hello, World!”
— Write and launch CUDA C kernels
— Manage GPU memory
— Run parallel kernels in CUDA C
— Parallel communication and synchronization
— Race conditions and atomic operations
CUDA C Prerequisites
 You (probably) need experience with C or C++
 You do not need any GPU experience
 You do not need any graphics experience
 You do not need any parallel programming experience
CUDA C: The Basics
 Terminology
 Host – The CPU and its memory (host memory)
 Device – The GPU and its memory (device memory)
[Figure: Host (CPU) and Device (GPU); not to scale]
Hello, World!
int main( void ) {
printf( "Hello, World!\n" );
return 0;
}
 This basic program is just standard C that runs on the host
 NVIDIA’s compiler (nvcc) will not complain about CUDA programs
with no device code
 At its simplest, CUDA C is just C!
Hello, World! with Device Code
__global__ void kernel( void ) {
}
int main( void ) {
kernel<<<1,1>>>();
printf( "Hello, World!\n" );
return 0;
}
 Two notable additions to the original “Hello, World!”
Hello, World! with Device Code
__global__ void kernel( void ) {
}
 CUDA C keyword __global__ indicates that a function
— Runs on the device
— Called from host code
 nvcc splits source file into host and device components
— NVIDIA’s compiler handles device functions like kernel()
— Standard host compiler handles host functions like main()
 gcc
 Microsoft Visual C
Hello, World! with Device Code
int main( void ) {
kernel<<< 1, 1 >>>();
printf( "Hello, World!\n" );
return 0;
}
 Triple angle brackets mark a call from host code to device code
— Sometimes called a “kernel launch”
— We’ll discuss the parameters inside the angle brackets later
 This is all that’s required to execute a function on the GPU!
 The function kernel() does nothing, so this is fairly anticlimactic…
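 Aside (not from the original deck): CUDA C sources conventionally use a .cu extension and are built with NVIDIA’s nvcc, e.g.
nvcc hello.cu -o hello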
A More Complex Example
 A simple kernel to add two integers:
__global__ void add( int *a, int *b, int *c ) {
*c = *a + *b;
}
 As before, __global__ is a CUDA C keyword meaning
— add() will execute on the device
— add() will be called from the host
A More Complex Example
 Notice that we use pointers for our variables:
__global__ void add( int *a, int *b, int *c ) {
*c = *a + *b;
}
 add() runs on the device…so a, b, and c must point to
device memory
 How do we allocate memory on the GPU?
Memory Management
 Host and device memory are distinct entities
— Device pointers point to GPU memory
 May be passed to and from host code
 May not be dereferenced from host code
— Host pointers point to CPU memory
 May be passed to and from device code
 May not be dereferenced from device code
 Basic CUDA API for dealing with device memory
— cudaMalloc(), cudaFree(), cudaMemcpy()
— Similar to their C equivalents, malloc(), free(), memcpy()
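 One practical detail the deck omits: each of these calls returns a cudaError_t, which real code should check. A minimal sketch (CHECK is our own hypothetical helper macro, not part of the CUDA API):
// hedged sketch: wrap CUDA calls so failures are reported, not silently ignored
#define CHECK( call ) do { \
    cudaError_t err = (call); \
    if( err != cudaSuccess ) \
        printf( "CUDA error: %s\n", cudaGetErrorString( err ) ); \
} while( 0 )
// usage: CHECK( cudaMalloc( (void**)&dev_a, size ) );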
A More Complex Example: add()
 Using our add() kernel:
__global__ void add( int *a, int *b, int *c ) {
*c = *a + *b;
}
 Let’s take a look at main()…
A More Complex Example: main()
int main( void ) {
int a, b, c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = sizeof( int ); // we need space for an integer
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, size );
a = 2;
b = 7;
A More Complex Example: main() (cont)
// copy inputs to device
cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, &b, size, cudaMemcpyHostToDevice );
// launch add() kernel on GPU, passing parameters
add<<< 1, 1 >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( &c, dev_c, size, cudaMemcpyDeviceToHost );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
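 Note (our addition, not on the slide): nothing here inspects the result; after the final cudaMemcpy we could verify it on the host:
printf( "2 + 7 = %d\n", c ); // expected output: 2 + 7 = 9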
Parallel Programming in CUDA C
 But wait…GPU computing is about massive parallelism
 So how do we run code in parallel on the device?
 Solution lies in the parameters between the triple angle brackets:
add<<< 1, 1 >>>( dev_a, dev_b, dev_c );
add<<< N, 1 >>>( dev_a, dev_b, dev_c );
 Instead of executing add() once, add() is executed N times in parallel
Parallel Programming in CUDA C
 With add() running in parallel…let’s do vector addition
 Terminology: Each parallel invocation of add() is referred to as a block
 Kernel can refer to its block’s index with the variable blockIdx.x
 Each block adds a value from a[] and b[], storing the result in c[]:
__global__ void add( int *a, int *b, int *c ) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
 By using blockIdx.x to index arrays, each block handles different indices
Parallel Programming in CUDA C
Block 1
c[1] = a[1] + b[1];
 We write this code:
__global__ void add( int *a, int *b, int *c ) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
 This is what runs in parallel on the device:
Block 0
c[0] = a[0] + b[0];
Block 2
c[2] = a[2] + b[2];
Block 3
c[3] = a[3] + b[3];
Parallel Addition: add()
 Using our newly parallelized add() kernel:
__global__ void add( int *a, int *b, int *c ) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
 Let’s take a look at main()…
Parallel Addition: main()
#define N 512
int main( void ) {
int *a, *b, *c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = N * sizeof( int ); // we need space for 512 integers
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, size );
a = (int*)malloc( size );
b = (int*)malloc( size );
c = (int*)malloc( size );
random_ints( a, N );
random_ints( b, N );
Parallel Addition: main() (cont)
// copy inputs to device
cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
// launch add() kernel with N parallel blocks
add<<< N, 1 >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
free( a ); free( b ); free( c );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
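 random_ints() is used above but never defined in the deck; a plausible host-side helper, assuming plain C rand() is acceptable:
#include <stdlib.h>
// hypothetical implementation of the helper the slides assume
void random_ints( int *p, int n ) {
    for( int i = 0; i < n; i++ )
        p[i] = rand();
}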
Review
 Difference between “host” and “device”
— Host = CPU
— Device = GPU
 Using __global__ to declare a function as device code
— Runs on device
— Called from host
 Passing parameters from host code to a device function
Review (cont)
 Basic device memory management
— cudaMalloc()
— cudaMemcpy()
— cudaFree()
 Launching parallel kernels
— Launch N copies of add() with: add<<< N, 1 >>>();
— Used blockIdx.x to access block’s index
Threads
 Terminology: A block can be split into parallel threads
 Let’s change vector addition to use parallel threads instead of parallel blocks:
__global__ void add( int *a, int *b, int *c ) {
c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
}
 We use threadIdx.x instead of blockIdx.x in add()
 main() will require one change as well…
Parallel Addition (Threads): main()
#define N 512
int main( void ) {
int *a, *b, *c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = N * sizeof( int ); // we need space for 512 integers
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, size );
a = (int*)malloc( size );
b = (int*)malloc( size );
c = (int*)malloc( size );
random_ints( a, N );
random_ints( b, N );
Parallel Addition (Threads): main() (cont)
// copy inputs to device
cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
// launch add() kernel with N threads (instead of N blocks)
add<<< 1, N >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
free( a ); free( b ); free( c );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
Using Threads And Blocks
 We’ve seen parallel vector addition using
— Many blocks with 1 thread apiece
— 1 block with many threads
 Let’s adapt vector addition to use lots of both blocks and threads
 After using threads and blocks together, we’ll talk about why threads
 First let’s discuss data indexing…
Indexing Arrays With Threads And Blocks
 No longer as simple as just using threadIdx.x or blockIdx.x as indices
 To index an array with 1 thread per entry (using 8 threads/block), each block covers a consecutive run of 8 entries:
[Figure: 4 blocks, blockIdx.x = 0 … 3, each containing threads threadIdx.x = 0 … 7]
 If we have M threads/block, a unique array index for each entry is given by
int index = threadIdx.x + blockIdx.x * M;
 This is the familiar 2D-to-1D mapping: int index = x + y * width;
Indexing Arrays: Example
 In this example, with M = 8 threads/block, the red entry in block 2 would have an index of 21:
int index = threadIdx.x + blockIdx.x * M;
          = 5 + 2 * 8;
          = 21;
[Figure: 24 array entries (0 … 23); the highlighted entry is thread 5 of block 2 (blockIdx.x = 2)]
Addition with Threads and Blocks
 blockDim.x is a built-in variable giving the number of threads per block:
int index = threadIdx.x + blockIdx.x * blockDim.x;
 A combined version of our vector addition kernel, using both blocks and threads:
__global__ void add( int *a, int *b, int *c ) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
c[index] = a[index] + b[index];
}
 So what changes in main() when we use both blocks and threads?
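 A caveat before moving on (our addition, not on the slide): the launch on the next slide assumes N is an exact multiple of the block size. When it is not, the usual pattern is to round the block count up and guard the index inside the kernel:
__global__ void add( int *a, int *b, int *c, int n ) {
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if( index < n ) // threads past the end of the arrays do nothing
        c[index] = a[index] + b[index];
}
// add<<< (n + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c, n );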
Parallel Addition (Blocks/Threads): main()
#define N (2048*2048)
#define THREADS_PER_BLOCK 512
int main( void ) {
int *a, *b, *c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = N * sizeof( int ); // we need space for N integers
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, size );
a = (int*)malloc( size );
b = (int*)malloc( size );
c = (int*)malloc( size );
random_ints( a, N );
random_ints( b, N );
Parallel Addition (Blocks/Threads): main() (cont)
// copy inputs to device
cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
// launch add() kernel with blocks and threads
add<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
free( a ); free( b ); free( c );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
Why Bother With Threads?
 Threads seem unnecessary
— Added a level of abstraction and complexity
— What did we gain?
 Unlike parallel blocks, parallel threads have mechanisms to
— Communicate
— Synchronize
 Let’s see how…
Dot Product
 Unlike vector addition, dot product is a reduction from vectors to a scalar
c = a ∙ b
c = (a0, a1, a2, a3) ∙ (b0, b1, b2, b3)
c = a0 b0 + a1 b1 + a2 b2 + a3 b3
[Figure: the components of a and b are multiplied pairwise and the products summed into c]
Dot Product
 Parallel threads have no problem computing the pairwise products:
 So we can start a dot product CUDA kernel by doing just that:
__global__ void dot( int *a, int *b, int *c ) {
// Each thread computes a pairwise product
int temp = a[threadIdx.x] * b[threadIdx.x];
[Figure: each thread computes one pairwise product ai * bi]
Dot Product
 But we need to share data between threads to compute the final sum:
__global__ void dot( int *a, int *b, int *c ) {
// Each thread computes a pairwise product
int temp = a[threadIdx.x] * b[threadIdx.x];
// Can’t compute the final sum
// Each thread’s copy of ‘temp’ is private
}
Sharing Data Between Threads
 Terminology: A block of threads shares memory called shared memory
 Extremely fast, on-chip memory (user-managed cache)
 Declared with the __shared__ CUDA keyword
 Not visible to threads in other blocks running in parallel
[Figure: Block 0, Block 1, Block 2, … each have their own shared memory, visible only to that block’s threads]
Parallel Dot Product: dot()
 We perform parallel multiplication, serial addition:
#define N 512
__global__ void dot( int *a, int *b, int *c ) {
// Shared memory for results of multiplication
__shared__ int temp[N];
temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
// Thread 0 sums the pairwise products
if( 0 == threadIdx.x ) {
int sum = 0;
for( int i = 0; i < N; i++ )
sum += temp[i];
*c = sum;
}
}
Parallel Dot Product Recap
 We perform parallel, pairwise multiplications
 Shared memory stores each thread’s result
 We sum these pairwise products from a single thread
 Sounds good…but we’ve made a huge mistake
Faulty Dot Product Exposed!
 Step 1: In parallel, each thread writes a pairwise product
 Step 2: Thread 0 reads and sums the products
 But there’s an assumption hidden in Step 1…
[Figure: in Step 1 all threads write __shared__ int temp[] in parallel; in Step 2 thread 0 reads every element]
Read-Before-Write Hazard
 Suppose thread 0 finishes its write in Step 1
 Then thread 0 reads index 12 in Step 2
 What if thread 12 hasn’t yet written to index 12 in Step 1?
 This read returns garbage!
Synchronization
 We need threads to wait between the sections of dot():
__global__ void dot( int *a, int *b, int *c ) {
__shared__ int temp[N];
temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
// * NEED THREADS TO SYNCHRONIZE HERE *
// No thread can advance until all threads
// have reached this point in the code
// Thread 0 sums the pairwise products
if( 0 == threadIdx.x ) {
int sum = 0;
for( int i = 0; i < N; i++ )
sum += temp[i];
*c = sum;
}
}
__syncthreads()
 We can synchronize threads with the function __syncthreads()
 Threads in the block wait until all threads have hit the __syncthreads()
 Threads are only synchronized within a block
[Figure: Threads 0–4 reach __syncthreads() at different times; none proceeds past the barrier until all have arrived]
Parallel Dot Product: dot()
__global__ void dot( int *a, int *b, int *c ) {
__shared__ int temp[N];
temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
__syncthreads();
if( 0 == threadIdx.x ) {
int sum = 0;
for( int i = 0; i < N; i++ )
sum += temp[i];
*c = sum;
}
}
 With a properly synchronized dot() routine, let’s look at main()
Parallel Dot Product: main()
#define N 512
int main( void ) {
int *a, *b, *c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = N * sizeof( int ); // we need space for 512 integers
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, sizeof( int ) );
a = (int *)malloc( size );
b = (int *)malloc( size );
c = (int *)malloc( sizeof( int ) );
random_ints( a, N );
random_ints( b, N );
Parallel Dot Product: main()
// copy inputs to device
cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
// launch dot() kernel with 1 block and N threads
dot<<< 1, N >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( c, dev_c, sizeof( int ) , cudaMemcpyDeviceToHost );
free( a ); free( b ); free( c );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
Review
 Launching kernels with parallel threads
— Launch add() with N threads: add<<< 1, N >>>();
— Used threadIdx.x to access thread’s index
 Using both blocks and threads
— Used (threadIdx.x + blockIdx.x * blockDim.x) to index input/output
— N/THREADS_PER_BLOCK blocks and THREADS_PER_BLOCK threads gave us N threads total
Review (cont)
 Using __shared__ to declare memory as shared memory
— Data shared among threads in a block
— Not visible to threads in other parallel blocks
 Using __syncthreads() as a barrier
— No thread executes instructions after __syncthreads() until all
threads have reached the __syncthreads()
— Needs to be used to prevent data hazards
Multiblock Dot Product
 Recall our dot product launch:
// launch dot() kernel with 1 block and N threads
dot<<< 1, N >>>( dev_a, dev_b, dev_c );
 Launching with one block will not utilize much of the GPU
 Let’s write a multiblock version of dot product
Multiblock Dot Product: Algorithm
 Each block computes a sum of its pairwise products like before:
[Figure: Block 0 computes a partial sum from a0…, b0…; Block 1 computes a partial sum from a512…, b512…; and so on]
Multiblock Dot Product: Algorithm
 And then contributes its sum to the final result:
[Figure: each block’s partial sum is then added into the single result c]
Multiblock Dot Product: dot()
#define N (2048*2048)
#define THREADS_PER_BLOCK 512
__global__ void dot( int *a, int *b, int *c ) {
__shared__ int temp[THREADS_PER_BLOCK];
int index = threadIdx.x + blockIdx.x * blockDim.x;
temp[threadIdx.x] = a[index] * b[index];
__syncthreads();
if( 0 == threadIdx.x ) {
int sum = 0;
for( int i = 0; i < THREADS_PER_BLOCK; i++ )
sum += temp[i];
*c += sum;
}
}
 But *c += sum; introduces a race condition…
 We can fix it with one of CUDA’s atomic operations: atomicAdd( c, sum );
Race Conditions
 Terminology: A race condition occurs when program behavior depends upon
relative timing of two (or more) event sequences
 What actually takes place to execute the line in question: *c += sum;
— Read value at address c
— Add sum to value
— Write result to address c
 Terminology: this sequence is a Read-Modify-Write
 What if two threads are trying to do this at the same time?
 Thread 0, Block 0
— Read value at address c
— Add sum to value
— Write result to address c
 Thread 0, Block 1
— Read value at address c
— Add sum to value
— Write result to address c
Global Memory Contention
 *c += sum; with c starting at 0, Block 0’s sum = 3, Block 1’s sum = 4
 A correct (serialized) interleaving of the two Read-Modify-Writes:
— Block 0 reads 0, computes 0 + 3 = 3, writes 3 (c: 0 → 3)
— Block 1 then reads 3, computes 3 + 4 = 7, writes 7 (c: 3 → 7)
 Each Read-Modify-Write completes before the next begins, so c ends at 7
Global Memory Contention
 Same code, same sums, but now the two Read-Modify-Writes overlap:
— Block 0 reads 0, computes 0 + 3 = 3
— Block 1 also reads 0, computes 0 + 4 = 4
— Block 0 writes 3, then Block 1 writes 4 (c: 0 → 3 → 4)
 Block 0’s update is lost: c ends at 4 instead of 7
Atomic Operations
 Terminology: a read-modify-write operation is uninterruptible when it is atomic
 Many atomic operations on memory are available with CUDA C
 Atomics give a predictable result when simultaneous access to memory is required
 We need to atomically add sum to c in our multiblock dot product
 atomicAdd()
 atomicSub()
 atomicMin()
 atomicMax()
 atomicInc()
 atomicDec()
 atomicExch()
 atomicCAS()
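 For the integer overload used next, the signature is int atomicAdd( int *address, int val ); it adds val to *address as one uninterruptible operation and returns the old value. A tiny sketch of our own (not from the deck):
__global__ void count_even( int *data, int *counter ) {
    // each matching thread bumps the counter in global memory;
    // atomicAdd guarantees no updates are lost
    if( data[threadIdx.x] % 2 == 0 )
        atomicAdd( counter, 1 );
}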
Multiblock Dot Product: dot()
__global__ void dot( int *a, int *b, int *c ) {
__shared__ int temp[THREADS_PER_BLOCK];
int index = threadIdx.x + blockIdx.x * blockDim.x;
temp[threadIdx.x] = a[index] * b[index];
__syncthreads();
if( 0 == threadIdx.x ) {
int sum = 0;
for( int i = 0; i < THREADS_PER_BLOCK; i++ )
sum += temp[i];
atomicAdd( c, sum );
}
}
 Now let’s fix up main() to handle a multiblock dot product
Multiblock Dot Product: main()
#define N (2048*2048)
#define THREADS_PER_BLOCK 512
int main( void ) {
int *a, *b, *c; // host copies of a, b, c
int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
int size = N * sizeof( int ); // we need space for N ints
// allocate device copies of a, b, c
cudaMalloc( (void**)&dev_a, size );
cudaMalloc( (void**)&dev_b, size );
cudaMalloc( (void**)&dev_c, sizeof( int ) );
a = (int *)malloc( size );
b = (int *)malloc( size );
c = (int *)malloc( sizeof( int ) );
random_ints( a, N );
random_ints( b, N );
Multiblock Dot Product: main() (cont)
// copy inputs to device
cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
// launch dot() kernel
dot<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );
// copy device result back to host copy of c
cudaMemcpy( c, dev_c, sizeof( int ) , cudaMemcpyDeviceToHost );
free( a ); free( b ); free( c );
cudaFree( dev_a );
cudaFree( dev_b );
cudaFree( dev_c );
return 0;
}
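 Not on the slides, but an easy sanity check is a host-side reference sum, computed before the arrays are freed. A sketch, assuming ordinary wraparound int arithmetic on both sides:
// hedged sketch: CPU reference dot product to compare against the GPU result
int reference = 0;
for( int i = 0; i < N; i++ )
    reference += a[i] * b[i]; // same wraparound semantics as the kernel’s int math
printf( "GPU: %d, CPU: %d\n", *c, reference );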
Review
 Race conditions
— Behavior depends upon relative timing of multiple event sequences
— Can occur when an implied read-modify-write is interruptible
 Atomic operations
— CUDA provides read-modify-write operations guaranteed to be atomic
— Atomics ensure correct results when multiple threads modify memory
To Learn More CUDA C
 Check out CUDA by Example
— Parallel Programming in CUDA C
— Thread Cooperation
— Constant Memory and Events
— Texture Memory
— Graphics Interoperability
— Atomics
— Streams
— CUDA C on Multiple GPUs
— Other CUDA Resources
 http://developer.nvidia.com/object/cuda-by-example.html
Questions
 First my questions
 Now your questions…

Contenu connexe

Tendances

NVidia CUDA Tutorial - June 15, 2009
NVidia CUDA Tutorial - June 15, 2009NVidia CUDA Tutorial - June 15, 2009
NVidia CUDA Tutorial - June 15, 2009Randall Hand
 
NVidia CUDA for Bruteforce Attacks - DefCamp 2012
NVidia CUDA for Bruteforce Attacks - DefCamp 2012NVidia CUDA for Bruteforce Attacks - DefCamp 2012
NVidia CUDA for Bruteforce Attacks - DefCamp 2012DefCamp
 
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCAccelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCinside-BigData.com
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
GPU Computing with Ruby
GPU Computing with RubyGPU Computing with Ruby
GPU Computing with RubyShin Yee Chung
 
Intro to GPGPU Programming with Cuda
Intro to GPGPU Programming with CudaIntro to GPGPU Programming with Cuda
Intro to GPGPU Programming with CudaRob Gillen
 
CUDA Deep Dive
CUDA Deep DiveCUDA Deep Dive
CUDA Deep Divekrasul
 
Gpu workshop cluster universe: scripting cuda
Gpu workshop cluster universe: scripting cudaGpu workshop cluster universe: scripting cuda
Gpu workshop cluster universe: scripting cudaFerdinand Jamitzky
 
Using GPUs for parallel processing
Using GPUs for parallel processingUsing GPUs for parallel processing
Using GPUs for parallel processingasm100
 

Tendances (18)

NVidia CUDA Tutorial - June 15, 2009
NVidia CUDA Tutorial - June 15, 2009NVidia CUDA Tutorial - June 15, 2009
NVidia CUDA Tutorial - June 15, 2009
 
CUDA
CUDACUDA
CUDA
 
NVidia CUDA for Bruteforce Attacks - DefCamp 2012
NVidia CUDA for Bruteforce Attacks - DefCamp 2012NVidia CUDA for Bruteforce Attacks - DefCamp 2012
NVidia CUDA for Bruteforce Attacks - DefCamp 2012
 
GPU: Understanding CUDA
GPU: Understanding CUDAGPU: Understanding CUDA
GPU: Understanding CUDA
 
Cuda
CudaCuda
Cuda
 
Cuda tutorial
Cuda tutorialCuda tutorial
Cuda tutorial
 
Cuda
CudaCuda
Cuda
 
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACCAccelerating HPC Applications on NVIDIA GPUs with OpenACC
Accelerating HPC Applications on NVIDIA GPUs with OpenACC
 
Cuda Architecture
Cuda ArchitectureCuda Architecture
Cuda Architecture
 
Lecture 04
Lecture 04Lecture 04
Lecture 04
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Cuda
CudaCuda
Cuda
 
GPU Computing with CUDA
GPU Computing with CUDAGPU Computing with CUDA
GPU Computing with CUDA
 
GPU Computing with Ruby
GPU Computing with RubyGPU Computing with Ruby
GPU Computing with Ruby
 
Intro to GPGPU Programming with Cuda
Intro to GPGPU Programming with CudaIntro to GPGPU Programming with Cuda
Intro to GPGPU Programming with Cuda
 
CUDA Deep Dive
CUDA Deep DiveCUDA Deep Dive
CUDA Deep Dive
 
Gpu workshop cluster universe: scripting cuda
Gpu workshop cluster universe: scripting cudaGpu workshop cluster universe: scripting cuda
Gpu workshop cluster universe: scripting cuda
 
Using GPUs for parallel processing
Using GPUs for parallel processingUsing GPUs for parallel processing
Using GPUs for parallel processing
 

Similaire à Introduction to CUDA C: NVIDIA : Notes

Tema3_Introduction_to_CUDA_C.pdf
Tema3_Introduction_to_CUDA_C.pdfTema3_Introduction_to_CUDA_C.pdf
Tema3_Introduction_to_CUDA_C.pdfpepe464163
 
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...mouhouioui
 
introduction to CUDA_C.pptx it is widely used
introduction to CUDA_C.pptx it is widely usedintroduction to CUDA_C.pptx it is widely used
introduction to CUDA_C.pptx it is widely usedHimanshu577858
 
CUDA by Example : Parallel Programming in CUDA C : Notes
CUDA by Example : Parallel Programming in CUDA C : NotesCUDA by Example : Parallel Programming in CUDA C : Notes
CUDA by Example : Parallel Programming in CUDA C : NotesSubhajit Sahu
 
Intro2 Cuda Moayad
Intro2 Cuda MoayadIntro2 Cuda Moayad
Intro2 Cuda MoayadMoayadhn
 
2011.02.18 marco parenzan - modelli di programmazione per le gpu
2011.02.18   marco parenzan - modelli di programmazione per le gpu2011.02.18   marco parenzan - modelli di programmazione per le gpu
2011.02.18 marco parenzan - modelli di programmazione per le gpuMarco Parenzan
 
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : Notes
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : NotesCUDA First Programs: Computer Architecture CSE448 : UAA Alaska : Notes
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : NotesSubhajit Sahu
 
CUDA Tutorial 01 : Say Hello to CUDA : Notes
CUDA Tutorial 01 : Say Hello to CUDA : NotesCUDA Tutorial 01 : Say Hello to CUDA : Notes
CUDA Tutorial 01 : Say Hello to CUDA : NotesSubhajit Sahu
 
Using Docker for GPU Accelerated Applications
Using Docker for GPU Accelerated ApplicationsUsing Docker for GPU Accelerated Applications
Using Docker for GPU Accelerated ApplicationsNVIDIA
 
CUDA by Example : Thread Cooperation : Notes
CUDA by Example : Thread Cooperation : NotesCUDA by Example : Thread Cooperation : Notes
CUDA by Example : Thread Cooperation : NotesSubhajit Sahu
 
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...Docker, Inc.
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computingArka Ghosh
 
C++20 the small things - Timur Doumler
C++20 the small things - Timur DoumlerC++20 the small things - Timur Doumler
C++20 the small things - Timur Doumlercorehard_by
 
C++ AMP 실천 및 적용 전략
C++ AMP 실천 및 적용 전략 C++ AMP 실천 및 적용 전략
C++ AMP 실천 및 적용 전략 명신 김
 

Similaire à Introduction to CUDA C: NVIDIA : Notes (20)

Tema3_Introduction_to_CUDA_C.pdf
Tema3_Introduction_to_CUDA_C.pdfTema3_Introduction_to_CUDA_C.pdf
Tema3_Introduction_to_CUDA_C.pdf
 
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...
Etude éducatif sur les GPUs & CPUs et les architectures paralleles -Programmi...
 
introduction to CUDA_C.pptx it is widely used
introduction to CUDA_C.pptx it is widely usedintroduction to CUDA_C.pptx it is widely used
introduction to CUDA_C.pptx it is widely used
 
CUDA by Example : Parallel Programming in CUDA C : Notes
CUDA by Example : Parallel Programming in CUDA C : NotesCUDA by Example : Parallel Programming in CUDA C : Notes
CUDA by Example : Parallel Programming in CUDA C : Notes
 
Intro2 Cuda Moayad
Intro2 Cuda MoayadIntro2 Cuda Moayad
Intro2 Cuda Moayad
 
2011.02.18 marco parenzan - modelli di programmazione per le gpu
2011.02.18   marco parenzan - modelli di programmazione per le gpu2011.02.18   marco parenzan - modelli di programmazione per le gpu
2011.02.18 marco parenzan - modelli di programmazione per le gpu
 
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : Notes
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : NotesCUDA First Programs: Computer Architecture CSE448 : UAA Alaska : Notes
CUDA First Programs: Computer Architecture CSE448 : UAA Alaska : Notes
 
Cuda 2011
Cuda 2011Cuda 2011
Cuda 2011
 
CUDA Tutorial 01 : Say Hello to CUDA : Notes
CUDA Tutorial 01 : Say Hello to CUDA : NotesCUDA Tutorial 01 : Say Hello to CUDA : Notes
CUDA Tutorial 01 : Say Hello to CUDA : Notes
 
Deep Learning Edge
Deep Learning Edge Deep Learning Edge
Deep Learning Edge
 
Using Docker for GPU Accelerated Applications
Using Docker for GPU Accelerated ApplicationsUsing Docker for GPU Accelerated Applications
Using Docker for GPU Accelerated Applications
 
CUDA by Example : Thread Cooperation : Notes
CUDA by Example : Thread Cooperation : NotesCUDA by Example : Thread Cooperation : Notes
CUDA by Example : Thread Cooperation : Notes
 
Cuda intro
Cuda introCuda intro
Cuda intro
 
Qe Reference
Qe ReferenceQe Reference
Qe Reference
 
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan...
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
Vpu technology &gpgpu computing
Vpu technology &gpgpu computingVpu technology &gpgpu computing
Vpu technology &gpgpu computing
 
C++20 the small things - Timur Doumler
C++20 the small things - Timur DoumlerC++20 the small things - Timur Doumler
C++20 the small things - Timur Doumler
 
C++ AMP 실천 및 적용 전략
C++ AMP 실천 및 적용 전략 C++ AMP 실천 및 적용 전략
C++ AMP 실천 및 적용 전략
 

Plus de Subhajit Sahu

DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTESDyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTESSubhajit Sahu
 
Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)Subhajit Sahu
 
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTESA Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTESSubhajit Sahu
 
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTESScalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTESSubhajit Sahu
 
Application Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTESApplication Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTESSubhajit Sahu
 
Community Detection on the GPU : NOTES
Community Detection on the GPU : NOTESCommunity Detection on the GPU : NOTES
Community Detection on the GPU : NOTESSubhajit Sahu
 
Survey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTESSurvey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTESSubhajit Sahu
 
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTERDynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTERSubhajit Sahu
 
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...Subhajit Sahu
 
Fast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTESFast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTESSubhajit Sahu
 
Can you fix farming by going back 8000 years : NOTES
Can you fix farming by going back 8000 years : NOTESCan you fix farming by going back 8000 years : NOTES
Can you fix farming by going back 8000 years : NOTESSubhajit Sahu
 
HITS algorithm : NOTES
HITS algorithm : NOTESHITS algorithm : NOTES
HITS algorithm : NOTESSubhajit Sahu
 
Basic Computer Architecture and the Case for GPUs : NOTES
Basic Computer Architecture and the Case for GPUs : NOTESBasic Computer Architecture and the Case for GPUs : NOTES
Basic Computer Architecture and the Case for GPUs : NOTESSubhajit Sahu
 
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDES
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDESDynamic Batch Parallel Algorithms for Updating Pagerank : SLIDES
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDESSubhajit Sahu
 
Are Satellites Covered in Gold Foil : NOTES
Are Satellites Covered in Gold Foil : NOTESAre Satellites Covered in Gold Foil : NOTES
Are Satellites Covered in Gold Foil : NOTESSubhajit Sahu
 
Taxation for Traders < Markets and Taxation : NOTES
Taxation for Traders < Markets and Taxation : NOTESTaxation for Traders < Markets and Taxation : NOTES
Taxation for Traders < Markets and Taxation : NOTESSubhajit Sahu
 
A Generalization of the PageRank Algorithm : NOTES
A Generalization of the PageRank Algorithm : NOTESA Generalization of the PageRank Algorithm : NOTES
A Generalization of the PageRank Algorithm : NOTESSubhajit Sahu
 
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...Subhajit Sahu
 
Income Tax Calender 2021 (ITD) : NOTES
Income Tax Calender 2021 (ITD) : NOTESIncome Tax Calender 2021 (ITD) : NOTES
Income Tax Calender 2021 (ITD) : NOTESSubhajit Sahu
 
Youngistaan Foundation: Annual Report 2020-21 : NOTES
Youngistaan Foundation: Annual Report 2020-21 : NOTESYoungistaan Foundation: Annual Report 2020-21 : NOTES
Youngistaan Foundation: Annual Report 2020-21 : NOTESSubhajit Sahu
 

Plus de Subhajit Sahu (20)

DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTESDyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
 
Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)
 
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTESA Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES
 
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTESScalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES
 
Application Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTESApplication Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTES
 
Community Detection on the GPU : NOTES
Community Detection on the GPU : NOTESCommunity Detection on the GPU : NOTES
Community Detection on the GPU : NOTES
 
Survey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTESSurvey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTES
 
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTERDynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER
 
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
 
Fast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTESFast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTES
 
Can you fix farming by going back 8000 years : NOTES
Can you fix farming by going back 8000 years : NOTESCan you fix farming by going back 8000 years : NOTES
Can you fix farming by going back 8000 years : NOTES
 
HITS algorithm : NOTES
HITS algorithm : NOTESHITS algorithm : NOTES
HITS algorithm : NOTES
 
Basic Computer Architecture and the Case for GPUs : NOTES
Basic Computer Architecture and the Case for GPUs : NOTESBasic Computer Architecture and the Case for GPUs : NOTES
Basic Computer Architecture and the Case for GPUs : NOTES
 
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDES
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDESDynamic Batch Parallel Algorithms for Updating Pagerank : SLIDES
Dynamic Batch Parallel Algorithms for Updating Pagerank : SLIDES
 
Are Satellites Covered in Gold Foil : NOTES
Are Satellites Covered in Gold Foil : NOTESAre Satellites Covered in Gold Foil : NOTES
Are Satellites Covered in Gold Foil : NOTES
 
Taxation for Traders < Markets and Taxation : NOTES
Taxation for Traders < Markets and Taxation : NOTESTaxation for Traders < Markets and Taxation : NOTES
Taxation for Traders < Markets and Taxation : NOTES
 
A Generalization of the PageRank Algorithm : NOTES
A Generalization of the PageRank Algorithm : NOTESA Generalization of the PageRank Algorithm : NOTES
A Generalization of the PageRank Algorithm : NOTES
 
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...
ApproxBioWear: Approximating Additions for Efficient Biomedical Wearable Comp...
 
Income Tax Calender 2021 (ITD) : NOTES
Income Tax Calender 2021 (ITD) : NOTESIncome Tax Calender 2021 (ITD) : NOTES
Income Tax Calender 2021 (ITD) : NOTES
 
Youngistaan Foundation: Annual Report 2020-21 : NOTES
Youngistaan Foundation: Annual Report 2020-21 : NOTESYoungistaan Foundation: Annual Report 2020-21 : NOTES
Youngistaan Foundation: Annual Report 2020-21 : NOTES
 

Dernier

Top Rated Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated  Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Top Rated  Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Call Girls in Nagpur High Profile
 
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...amitlee9823
 
Top Rated Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated  Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Top Rated  Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Call Girls in Nagpur High Profile
 
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...amitlee9823
 
Shikrapur Call Girls Most Awaited Fun 6297143586 High Profiles young Beautie...
Shikrapur Call Girls Most Awaited Fun  6297143586 High Profiles young Beautie...Shikrapur Call Girls Most Awaited Fun  6297143586 High Profiles young Beautie...
Shikrapur Call Girls Most Awaited Fun 6297143586 High Profiles young Beautie...tanu pandey
 
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedSector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedDelhi Call girls
 
Book Sex Workers Available Pune Call Girls Yerwada 6297143586 Call Hot India...
Book Sex Workers Available Pune Call Girls Yerwada  6297143586 Call Hot India...Book Sex Workers Available Pune Call Girls Yerwada  6297143586 Call Hot India...
Book Sex Workers Available Pune Call Girls Yerwada 6297143586 Call Hot India...Call Girls in Nagpur High Profile
 
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammam
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in DammamAbortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammam
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammamahmedjiabur940
 
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7shivanni mehta
 
HLH PPT.ppt very important topic to discuss
HLH PPT.ppt very important topic to discussHLH PPT.ppt very important topic to discuss
HLH PPT.ppt very important topic to discussDrMSajidNoor
 
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...amitlee9823
 
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制uodye
 
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...motiram463
 
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...drmarathore
 
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...Naicy mandal
 
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night StandCall Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Standamitlee9823
 
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)amitlee9823
 
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)amitlee9823
 

Dernier (20)

CHEAP Call Girls in Mayapuri (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Mayapuri  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Mayapuri  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Mayapuri (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Top Rated Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated  Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Top Rated  Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls Chakan ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
 
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
 
Top Rated Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated  Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...Top Rated  Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls Katraj ⟟ 6297143586 ⟟ Call Me For Genuine Sex Serv...
 
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Bommasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
 
Shikrapur Call Girls Most Awaited Fun 6297143586 High Profiles young Beautie...
Shikrapur Call Girls Most Awaited Fun  6297143586 High Profiles young Beautie...Shikrapur Call Girls Most Awaited Fun  6297143586 High Profiles young Beautie...
Shikrapur Call Girls Most Awaited Fun 6297143586 High Profiles young Beautie...
 
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedSector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
 
Book Sex Workers Available Pune Call Girls Yerwada 6297143586 Call Hot India...
Book Sex Workers Available Pune Call Girls Yerwada  6297143586 Call Hot India...Book Sex Workers Available Pune Call Girls Yerwada  6297143586 Call Hot India...
Book Sex Workers Available Pune Call Girls Yerwada 6297143586 Call Hot India...
 
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammam
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in DammamAbortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammam
Abortion Pill for sale in Riyadh ((+918761049707) Get Cytotec in Dammam
 
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7
Escorts Service Daryaganj - 9899900591 College Girls & Models 24/7
 
HLH PPT.ppt very important topic to discuss
HLH PPT.ppt very important topic to discussHLH PPT.ppt very important topic to discuss
HLH PPT.ppt very important topic to discuss
 
CHEAP Call Girls in Hauz Quazi (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Hauz Quazi  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Hauz Quazi  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Hauz Quazi (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...
 
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制
一比一原版(Otago毕业证书)奥塔哥理工学院毕业证成绩单学位证靠谱定制
 
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
 
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
 
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...
Makarba ( Call Girls ) Ahmedabad ✔ 6297143586 ✔ Hot Model With Sexy Bhabi Rea...
 
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night StandCall Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In RT Nagar ☎ 7737669865 🥵 Book Your One night Stand
 
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Arekere ☎ 7737669865☎ Book Your One night Stand (Bangalore)
 
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
 

Introduction to CUDA C: NVIDIA : Notes

  • 2. What is CUDA?  CUDA Architecture — Expose general-purpose GPU computing as first-class capability — Retain traditional DirectX/OpenGL graphics performance  CUDA C — Based on industry-standard C — A handful of language extensions to allow heterogeneous programs — Straightforward APIs to manage devices, memory, etc.  This talk will introduce you to CUDA C
  • 3. Introduction to CUDA C  What will you learn today? — Start from ―Hello, World!‖ — Write and launch CUDA C kernels — Manage GPU memory — Run parallel kernels in CUDA C — Parallel communication and synchronization — Race conditions and atomic operations
  • 4. CUDA C Prerequisites  You (probably) need experience with C or C++  You do not need any GPU experience  You do not need any graphics experience  You do not need any parallel programming experience
  • 5. CUDA C: The Basics Host Note: Figure Not to Scale  Terminology  Host – The CPU and its memory (host memory)  Device – The GPU and its memory (device memory) Device
  • 6. Hello, World! int main( void ) { printf( "Hello, World!n" ); return 0; }  This basic program is just standard C that runs on the host  NVIDIA’s compiler (nvcc) will not complain about CUDA programs with no device code  At its simplest, CUDA C is just C!
  • 7. Hello, World! with Device Code __global__ void kernel( void ) { } int main( void ) { kernel<<<1,1>>>(); printf( "Hello, World!n" ); return 0; }  Two notable additions to the original ―Hello, World!‖
  • 8. Hello, World! with Device Code __global__ void kernel( void ) { }  CUDA C keyword __global__ indicates that a function — Runs on the device — Called from host code  nvcc splits source file into host and device components — NVIDIA’s compiler handles device functions like kernel() — Standard host compiler handles host functions like main()  gcc  Microsoft Visual C
  • 9. Hello, World! with Device Code int main( void ) { kernel<<< 1, 1 >>>(); printf( "Hello, World!n" ); return 0; }  Triple angle brackets mark a call from host code to device code — Sometimes called a ―kernel launch‖ — We’ll discuss the parameters inside the angle brackets later  This is all that’s required to execute a function on the GPU!  The function kernel() does nothing, so this is fairly anticlimactic…
  • 10. A More Complex Example  A simple kernel to add two integers: __global__ void add( int *a, int *b, int *c ) { *c = *a + *b; }  As before, __global__ is a CUDA C keyword meaning — add() will execute on the device — add() will be called from the host
  • 11. A More Complex Example  Notice that we use pointers for our variables: __global__ void add( int *a, int *b, int *c ) { *c = *a + *b; }  add() runs on the device…so a, b, and c must point to device memory  How do we allocate memory on the GPU?
  • 12. Memory Management  Host and device memory are distinct entities — Device pointers point to GPU memory  May be passed to and from host code  May not be dereferenced from host code — Host pointers point to CPU memory  May be passed to and from device code  May not be dereferenced from device code  Basic CUDA API for dealing with device memory — cudaMalloc(), cudaFree(), cudaMemcpy() — Similar to their C equivalents, malloc(), free(), memcpy()
  • 13. A More Complex Example: add()  Using our add()kernel: __global__ void add( int *a, int *b, int *c ) { *c = *a + *b; }  Let’s take a look at main()…
  • 14. A More Complex Example: main() int main( void ) { int a, b, c; // host copies of a, b, c int *dev_a, *dev_b, *dev_c; // device copies of a, b, c int size = sizeof( int ); // we need space for an integer // allocate device copies of a, b, c cudaMalloc( (void**)&dev_a, size ); cudaMalloc( (void**)&dev_b, size ); cudaMalloc( (void**)&dev_c, size ); a = 2; b = 7;
  • 15. A More Complex Example: main() (cont) // copy inputs to device cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice ); cudaMemcpy( dev_b, &b, size, cudaMemcpyHostToDevice ); // launch add() kernel on GPU, passing parameters add<<< 1, 1 >>>( dev_a, dev_b, dev_c ); // copy device result back to host copy of c cudaMemcpy( &c, dev_c, size, cudaMemcpyDeviceToHost ); cudaFree( dev_a ); cudaFree( dev_b ); cudaFree( dev_c ); return 0; }
  • 16. Parallel Programming in CUDA C  But wait…GPU computing is about massive parallelism  So how do we run code in parallel on the device?  Solution lies in the parameters between the triple angle brackets: add<<< 1, 1 >>>( dev_a, dev_b, dev_c ); add<<< N, 1 >>>( dev_a, dev_b, dev_c );  Instead of executing add() once, add() executed N times in parallel
  • 17. Parallel Programming in CUDA C  With add() running in parallel…let’s do vector addition  Terminology: Each parallel invocation of add() referred to as a block  Kernel can refer to its block’s index with the variable blockIdx.x  Each block adds a value from a[] and b[], storing the result in c[]: __global__ void add( int *a, int *b, int *c ) { c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x]; }  By using blockIdx.x to index arrays, each block handles different indices
  • 18. Parallel Programming in CUDA C Block 1 c[1] = a[1] + b[1];  We write this code: __global__ void add( int *a, int *b, int *c ) { c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x]; }  This is what runs in parallel on the device: Block 0 c[0] = a[0] + b[0]; Block 2 c[2] = a[2] + b[2]; Block 3 c[3] = a[3] + b[3];
  • 19. Parallel Addition: add()  Using our newly parallelized add()kernel: __global__ void add( int *a, int *b, int *c ) { c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x]; }  Let’s take a look at main()…
  • 20. Parallel Addition: main() #define N 512 int main( void ) { int *a, *b, *c; // host copies of a, b, c int *dev_a, *dev_b, *dev_c; // device copies of a, b, c int size = N * sizeof( int ); // we need space for 512 integers // allocate device copies of a, b, c cudaMalloc( (void**)&dev_a, size ); cudaMalloc( (void**)&dev_b, size ); cudaMalloc( (void**)&dev_c, size ); a = (int*)malloc( size ); b = (int*)malloc( size ); c = (int*)malloc( size ); random_ints( a, N ); random_ints( b, N );
  • 21. Parallel Addition: main() (cont)
        // copy inputs to device
        cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
        cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
        // launch add() kernel with N parallel blocks
        add<<< N, 1 >>>( dev_a, dev_b, dev_c );
        // copy device result back to host copy of c
        cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
        free( a ); free( b ); free( c );
        cudaFree( dev_a );
        cudaFree( dev_b );
        cudaFree( dev_c );
        return 0;
    }
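    The slides call a helper random_ints() without showing it; it is not part of the CUDA
    API. One plausible host-side implementation (an assumption on our part) is:

        #include <stdlib.h>
        // fill an array with n pseudo-random integers
        void random_ints( int *p, int n ) {
            for( int i = 0; i < n; i++ )
                p[i] = rand() % 100;   // small values keep results easy to eyeball
        }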
  • 22. Review
    Difference between "host" and "device"
    — Host = CPU
    — Device = GPU
    Using __global__ to declare a function as device code
    — Runs on the device
    — Called from the host
    Passing parameters from host code to a device function
  • 23. Review (cont)
    Basic device memory management
    — cudaMalloc()
    — cudaMemcpy()
    — cudaFree()
    Launching parallel kernels
    — Launch N copies of add() with: add<<< N, 1 >>>();
    — Used blockIdx.x to access the block’s index
  • 24. Threads
    Terminology: A block can be split into parallel threads
    Let’s change vector addition to use parallel threads instead of parallel blocks:
    __global__ void add( int *a, int *b, int *c ) {
        c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
    }
    We use threadIdx.x instead of blockIdx.x in add()
    main() will require one change as well…
  • 25. Parallel Addition (Threads): main()
    #define N 512
    int main( void ) {
        int *a, *b, *c;              // host copies of a, b, c
        int *dev_a, *dev_b, *dev_c;  // device copies of a, b, c
        int size = N * sizeof( int );  // we need space for 512 integers
        // allocate device copies of a, b, c
        cudaMalloc( (void**)&dev_a, size );
        cudaMalloc( (void**)&dev_b, size );
        cudaMalloc( (void**)&dev_c, size );
        a = (int*)malloc( size );
        b = (int*)malloc( size );
        c = (int*)malloc( size );
        random_ints( a, N );
        random_ints( b, N );
  • 26. Parallel Addition (Threads): main() (cont)
        // copy inputs to device
        cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
        cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
        // launch add() kernel with N threads (was: N blocks, 1 thread — <<< N, 1 >>>)
        add<<< 1, N >>>( dev_a, dev_b, dev_c );
        // copy device result back to host copy of c
        cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
        free( a ); free( b ); free( c );
        cudaFree( dev_a );
        cudaFree( dev_b );
        cudaFree( dev_c );
        return 0;
    }
  • 27. Using Threads And Blocks
    We’ve seen parallel vector addition using
    — Many blocks with 1 thread apiece
    — 1 block with many threads
    Let’s adapt vector addition to use lots of both blocks and threads
    After using threads and blocks together, we’ll talk about why threads are worth the trouble
    First let’s discuss data indexing…
  • 28. Indexing Arrays With Threads And Blocks
    No longer as simple as just using threadIdx.x or blockIdx.x as indices
    To index an array with 1 thread per entry (using 8 threads/block):
    [diagram: four blocks, blockIdx.x = 0..3, each containing threads threadIdx.x = 0..7]
    If we have M threads/block, a unique array index for each entry is given by
        int index = threadIdx.x + blockIdx.x * M;
    the same 2D-to-1D mapping as
        int index = x + y * width;
  • 29. Indexing Arrays: Example
    In this example, the red entry would have an index of 21:
        int index = threadIdx.x + blockIdx.x * M;
                  = 5 + 2 * 8;
                  = 21;
    [diagram: entries 0–23 laid out in blocks of M = 8 threads/block; entry 21 highlighted
     in blockIdx.x = 2 at threadIdx.x = 5]
  • 30. Addition with Threads and Blocks
    blockDim.x is a built-in variable holding the number of threads per block:
        int index = threadIdx.x + blockIdx.x * blockDim.x;
    A combined version of our vector addition kernel using both blocks and threads:
    __global__ void add( int *a, int *b, int *c ) {
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        c[index] = a[index] + b[index];
    }
    So what changes in main() when we use both blocks and threads?
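    The deck’s kernels assume N is an exact multiple of the block size, so every index is
    valid. When that does not hold, the common idiom (a sketch we are adding, not shown in
    the slides) is to pass the element count and guard the index:

        __global__ void add( int *a, int *b, int *c, int n ) {
            int index = threadIdx.x + blockIdx.x * blockDim.x;
            if( index < n )            // extra threads in the last block do nothing
                c[index] = a[index] + b[index];
        }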
  • 31. Parallel Addition (Blocks/Threads): main()
    #define N (2048*2048)
    #define THREADS_PER_BLOCK 512
    int main( void ) {
        int *a, *b, *c;              // host copies of a, b, c
        int *dev_a, *dev_b, *dev_c;  // device copies of a, b, c
        int size = N * sizeof( int );  // we need space for N integers
        // allocate device copies of a, b, c
        cudaMalloc( (void**)&dev_a, size );
        cudaMalloc( (void**)&dev_b, size );
        cudaMalloc( (void**)&dev_c, size );
        a = (int*)malloc( size );
        b = (int*)malloc( size );
        c = (int*)malloc( size );
        random_ints( a, N );
        random_ints( b, N );
  • 32. Parallel Addition (Blocks/Threads): main()
        // copy inputs to device
        cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
        cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
        // launch add() kernel with blocks and threads
        add<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );
        // copy device result back to host copy of c
        cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
        free( a ); free( b ); free( c );
        cudaFree( dev_a );
        cudaFree( dev_b );
        cudaFree( dev_c );
        return 0;
    }
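    Here N divides evenly by THREADS_PER_BLOCK. If it did not, the integer division in the
    launch would drop the remainder; the usual round-up (again our addition, paired with the
    guarded kernel sketched above) is:

        int blocks = ( N + THREADS_PER_BLOCK - 1 ) / THREADS_PER_BLOCK;  // ceiling division
        add<<< blocks, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c, N );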
  • 33. Why Bother With Threads?
    Threads seem unnecessary
    — They added a level of abstraction and complexity
    — What did we gain?
    Unlike parallel blocks, parallel threads have mechanisms to
    — Communicate
    — Synchronize
    Let’s see how…
  • 34. Dot Product
    Unlike vector addition, dot product is a reduction from vectors to a scalar
        c = a ∙ b
        c = (a0, a1, a2, a3) ∙ (b0, b1, b2, b3)
        c = a0 b0 + a1 b1 + a2 b2 + a3 b3
    [diagram: elements of a and b multiplied pairwise, the four products summed into c]
  • 35. Dot Product
    Parallel threads have no problem computing the pairwise products,
    so we can start a dot product CUDA kernel by doing just that:
    __global__ void dot( int *a, int *b, int *c ) {
        // Each thread computes a pairwise product
        int temp = a[threadIdx.x] * b[threadIdx.x];
    [diagram: pairwise products of a and b computed in parallel, awaiting the final sum]
  • 36. Dot Product
    But we need to share data between threads to compute the final sum:
    __global__ void dot( int *a, int *b, int *c ) {
        // Each thread computes a pairwise product
        int temp = a[threadIdx.x] * b[threadIdx.x];
        // Can’t compute the final sum —
        // each thread’s copy of ‘temp’ is private
    }
    [diagram: pairwise products exist only in per-thread registers; no thread can sum them]
  • 37. Sharing Data Between Threads
    Terminology: A block of threads shares memory called… shared memory
    Extremely fast, on-chip memory (user-managed cache)
    Declared with the __shared__ CUDA keyword
    Not visible to threads in other blocks running in parallel
    [diagram: Blocks 0, 1, 2, … each pairing their own threads with their own shared memory]
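    Besides the fixed-size declaration the next slide uses, CUDA also lets the launch choose
    the shared-memory size via the extern __shared__ form. A brief sketch (our addition;
    dev_out is assumed to be a device allocation of at least 128 ints):

        __global__ void kernel( int *out ) {
            extern __shared__ int buf[];        // size is set by the launch below
            buf[threadIdx.x] = threadIdx.x;
            out[threadIdx.x] = buf[threadIdx.x];
        }
        // the optional third launch parameter is the shared-memory size in bytes
        kernel<<< 1, 128, 128 * sizeof( int ) >>>( dev_out );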
  • 38. Parallel Dot Product: dot()
    We perform parallel multiplication, then serial addition:
    #define N 512
    __global__ void dot( int *a, int *b, int *c ) {
        // Shared memory for results of multiplication
        __shared__ int temp[N];
        temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
        // Thread 0 sums the pairwise products
        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < N; i++ )
                sum += temp[i];
            *c = sum;
        }
    }
  • 39. Parallel Dot Product Recap
    We perform parallel, pairwise multiplications
    Shared memory stores each thread’s result
    We sum these pairwise products from a single thread
    Sounds good… but we’ve made a huge mistake
  • 40. Faulty Dot Product Exposed!
    Step 1: In parallel, each thread writes a pairwise product into __shared__ int temp[]
    Step 2: Thread 0 reads and sums the products
    But there’s an assumption hidden in Step 1…
    [diagram: all threads writing temp[] in parallel, then thread 0 reading every entry]
  • 41. Read-Before-Write Hazard
    Suppose thread 0 finishes its write in Step 1 and moves on to Step 2
    What if thread 0 reads index 12 in Step 2 before thread 12 has written index 12 in Step 1?
    That read returns garbage!
  • 42. Synchronization
    We need threads to wait between the sections of dot():
    __global__ void dot( int *a, int *b, int *c ) {
        __shared__ int temp[N];
        temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
        // * NEED THREADS TO SYNCHRONIZE HERE *
        // No thread can advance until all threads
        // have reached this point in the code
        // Thread 0 sums the pairwise products
        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < N; i++ )
                sum += temp[i];
            *c = sum;
        }
    }
  • 43. __syncthreads()
    We can synchronize threads with the function __syncthreads()
    Threads in the block wait until all threads have hit the __syncthreads()
    Threads are only synchronized within a block
    [diagram: Threads 0–4 each reach their __syncthreads() call and wait at the barrier together]
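    One caveat worth adding: every thread in the block must reach the same __syncthreads()
    call. Placing it inside a branch that only some threads take is undefined behavior and
    can hang the kernel. A sketch of the hazard (our example, not from the deck):

        if( threadIdx.x < 32 ) {
            __syncthreads();   // WRONG: threads 32 and up never arrive at this barrier
        }
        __syncthreads();       // RIGHT: outside the branch, every thread reaches it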
  • 44. Parallel Dot Product: dot()
    __global__ void dot( int *a, int *b, int *c ) {
        __shared__ int temp[N];
        temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
        __syncthreads();
        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < N; i++ )
                sum += temp[i];
            *c = sum;
        }
    }
    With a properly synchronized dot() routine, let’s look at main()
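    The thread-0 loop above sums serially. A common alternative, sketched here as an aside
    (it is not part of this deck), is a tree reduction in shared memory; it assumes
    blockDim.x is a power of two, which holds for the <<< 1, N >>> launch with N = 512:

        __global__ void dot( int *a, int *b, int *c ) {
            __shared__ int temp[N];
            temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
            __syncthreads();
            // each pass halves the number of active threads
            for( int stride = blockDim.x / 2; stride > 0; stride /= 2 ) {
                if( threadIdx.x < stride )
                    temp[threadIdx.x] += temp[threadIdx.x + stride];
                __syncthreads();   // this pass’s writes finish before the next begins
            }
            if( 0 == threadIdx.x )
                *c = temp[0];
        }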
  • 45. Parallel Dot Product: main()
    #define N 512
    int main( void ) {
        int *a, *b, *c;              // host copies of a, b, c
        int *dev_a, *dev_b, *dev_c;  // device copies of a, b, c
        int size = N * sizeof( int );  // we need space for 512 integers
        // allocate device copies of a, b, c (c is a single int)
        cudaMalloc( (void**)&dev_a, size );
        cudaMalloc( (void**)&dev_b, size );
        cudaMalloc( (void**)&dev_c, sizeof( int ) );
        a = (int *)malloc( size );
        b = (int *)malloc( size );
        c = (int *)malloc( sizeof( int ) );
        random_ints( a, N );
        random_ints( b, N );
  • 46. Parallel Dot Product: main()
        // copy inputs to device
        cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
        cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
        // launch dot() kernel with 1 block and N threads
        dot<<< 1, N >>>( dev_a, dev_b, dev_c );
        // copy device result back to host copy of c
        cudaMemcpy( c, dev_c, sizeof( int ), cudaMemcpyDeviceToHost );
        free( a ); free( b ); free( c );
        cudaFree( dev_a );
        cudaFree( dev_b );
        cudaFree( dev_c );
        return 0;
    }
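    One detail the slides gloss over: kernel launches return to the host immediately. The
    cudaMemcpy() above implicitly waits for dot() to finish; to wait explicitly (for
    example, before timing a kernel), one can call:

        cudaDeviceSynchronize();   // blocks the host until all device work completes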
  • 47. Review
    Launching kernels with parallel threads
    — Launch add() with N threads: add<<< 1, N >>>();
    — Used threadIdx.x to access the thread’s index
    Using both blocks and threads
    — Used (threadIdx.x + blockIdx.x * blockDim.x) to index input/output
    — N/THREADS_PER_BLOCK blocks and THREADS_PER_BLOCK threads gave us N threads total
  • 48. Review (cont)
    Using __shared__ to declare memory as shared memory
    — Data is shared among threads in a block
    — Not visible to threads in other parallel blocks
    Using __syncthreads() as a barrier
    — No thread executes instructions after __syncthreads() until all threads have reached the __syncthreads()
    — Needed to prevent data hazards
  • 49. Multiblock Dot Product
    Recall our dot product launch:
        // launch dot() kernel with 1 block and N threads
        dot<<< 1, N >>>( dev_a, dev_b, dev_c );
    Launching with one block will not utilize much of the GPU
    Let’s write a multiblock version of dot product
  • 50. Multiblock Dot Product: Algorithm
    Each block computes a sum of its pairwise products like before:
    [diagram: Block 0 multiplies a0..a3, b0..b3 (and so on) and reduces to its own sum;
     Block 1 does the same starting at a512, b512]
  • 51. Multiblock Dot Product: Algorithm
    And then contributes its sum to the final result:
    [diagram: the per-block sums from Block 0 and Block 1 are combined into the single scalar c]
  • 52. Multiblock Dot Product: dot()
    #define N (2048*2048)
    #define THREADS_PER_BLOCK 512
    __global__ void dot( int *a, int *b, int *c ) {
        __shared__ int temp[THREADS_PER_BLOCK];
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        temp[threadIdx.x] = a[index] * b[index];
        __syncthreads();
        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < THREADS_PER_BLOCK; i++ )
                sum += temp[i];
            *c += sum;
        }
    }
    But we have a race condition…
    We can fix it with one of CUDA’s atomic operations: replace *c += sum; with atomicAdd( c, sum );
  • 53. Race Conditions
    Terminology: A race condition occurs when program behavior depends upon the relative timing of two (or more) event sequences
    What actually takes place to execute the line in question, *c += sum;
    — Read value at address c
    — Add sum to value
    — Write result to address c
    Terminology: Read-Modify-Write
    What if two threads are trying to do this at the same time?
    — Thread 0, Block 0: read value at address c; add sum to value; write result to address c
    — Thread 0, Block 1: read value at address c; add sum to value; write result to address c
  • 54. Global Memory Contention
    *c += sum, with c starting at 0, Block 0’s sum = 3, Block 1’s sum = 4
    When the two read-modify-writes happen to run back to back:
        Block 0 reads 0, computes 0+3 = 3, writes 3    (c: 0 to 3)
        Block 1 reads 3, computes 3+4 = 7, writes 7    (c: 3 to 7)
    Each read-modify-write completes before the next begins, and c ends at the correct value, 7
  • 55. Global Memory Contention
    *c += sum, same starting state: c = 0, Block 0’s sum = 3, Block 1’s sum = 4
    When the two read-modify-writes interleave:
        Block 0 reads 0
        Block 1 reads 0    (before Block 0 has written!)
        Block 0 computes 0+3 = 3 and writes 3
        Block 1 computes 0+4 = 4 and writes 4
    c ends at 4 — Block 0’s contribution is lost
  • 56. Atomic Operations
    Terminology: a read-modify-write is uninterruptible when it is atomic
    Many atomic operations on memory are available in CUDA C:
        atomicAdd()   atomicSub()   atomicMin()   atomicMax()
        atomicInc()   atomicDec()   atomicExch()  atomicCAS()
    They give a predictable result when simultaneous access to memory is required
    We need to atomically add sum to c in our multiblock dot product
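    A handy detail about these functions: each returns the value that was at the address
    before the operation. A small sketch (our example, not from the deck) using that return
    value to hand out unique output slots:

        __global__ void collect( int *out, int *count, int value ) {
            int slot = atomicAdd( count, 1 );   // the old count doubles as a unique slot
            out[slot] = value;
        }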
  • 57. Multiblock Dot Product: dot()
    __global__ void dot( int *a, int *b, int *c ) {
        __shared__ int temp[THREADS_PER_BLOCK];
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        temp[threadIdx.x] = a[index] * b[index];
        __syncthreads();
        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < THREADS_PER_BLOCK; i++ )
                sum += temp[i];
            atomicAdd( c, sum );
        }
    }
    Now let’s fix up main() to handle a multiblock dot product
  • 58. Parallel Dot Product: main()
    #define N (2048*2048)
    #define THREADS_PER_BLOCK 512
    int main( void ) {
        int *a, *b, *c;              // host copies of a, b, c
        int *dev_a, *dev_b, *dev_c;  // device copies of a, b, c
        int size = N * sizeof( int );  // we need space for N ints
        // allocate device copies of a, b, c
        cudaMalloc( (void**)&dev_a, size );
        cudaMalloc( (void**)&dev_b, size );
        cudaMalloc( (void**)&dev_c, sizeof( int ) );
        a = (int *)malloc( size );
        b = (int *)malloc( size );
        c = (int *)malloc( sizeof( int ) );
        random_ints( a, N );
        random_ints( b, N );
  • 59. Parallel Dot Product: main()
        // copy inputs to device
        cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
        cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
        // zero the accumulator before the launch: atomicAdd() adds into *c,
        // and cudaMalloc() does not clear the memory it allocates
        cudaMemset( dev_c, 0, sizeof( int ) );
        // launch dot() kernel
        dot<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );
        // copy device result back to host copy of c
        cudaMemcpy( c, dev_c, sizeof( int ), cudaMemcpyDeviceToHost );
        free( a ); free( b ); free( c );
        cudaFree( dev_a );
        cudaFree( dev_b );
        cudaFree( dev_c );
        return 0;
    }
  • 60. Review
    Race conditions
    — Behavior depends upon the relative timing of multiple event sequences
    — Can occur when an implied read-modify-write is interruptible
    Atomic operations
    — CUDA provides read-modify-write operations guaranteed to be atomic
    — Atomics ensure correct results when multiple threads modify memory
  • 61. To Learn More CUDA C
    Check out CUDA by Example
    — Parallel Programming in CUDA C
    — Thread Cooperation
    — Constant Memory and Events
    — Texture Memory
    — Graphics Interoperability
    — Atomics
    — Streams
    — CUDA C on Multiple GPUs
    — Other CUDA Resources
    http://developer.nvidia.com/object/cuda-by-example.html
  • 62. Questions
    First my questions
    Now your questions…