| chapter | exercise | type | question | choices | answer | explanation | topic_tags |
|---|---|---|---|---|---|---|---|
| 2 | 
	1 | 
	mcq | 
	If we want to use each thread in a grid to calculate one output element of a vector addition, what is the expression for mapping the thread/block indices to the data index i? | 
	[
  "A. i = threadIdx.x + threadIdx.y;",
  "B. i = blockIdx.x + threadIdx.x;",
  "C. i = blockIdx.x * blockDim.x + threadIdx.x;",
  "D. i = blockIdx.x * threadIdx.x;"
] | 
	C | 
	You need both the block offset (blockIdx.x * blockDim.x) and the thread offset within the block (threadIdx.x). | 
	[
  "CUDA",
  "indexing",
  "grid",
  "blockDim"
] | 
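A minimal kernel sketch (not part of the dataset; the kernel name and bounds check are assumptions) showing the index expression from exercise 1 in context:
```c
// Sketch: one thread per output element, i = blockIdx.x * blockDim.x + threadIdx.x.
__global__ void vecAddKernel(const float* A, const float* B, float* C, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // block offset + thread offset within the block
    if (i < n) {                                    // guard against the last partial block
        C[i] = A[i] + B[i];
    }
}
```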
| 2 | 
	2 | 
	mcq | 
	Each thread calculates two adjacent elements of a vector addition. What is the expression for the data index i of the first element processed by a thread? | 
	[
  "A. i = blockIdx.x * blockDim.x + threadIdx.x * 2;",
  "B. i = blockIdx.x * threadIdx.x * 2;",
  "C. i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;",
  "D. i = blockIdx.x * blockDim.x * 2 + threadIdx.x;"
] | 
	C | 
	This doubles the logical thread index so each thread starts at an even index (0,2,4,...) while remaining contiguous across blocks. | 
	[
  "CUDA",
  "indexing",
  "coarsening"
] | 
| 2 | 
	3 | 
	mcq | 
	Each thread calculates two elements. A block processes 2*blockDim.x consecutive elements in two sections: first section (each thread does one element), then second section (each thread does one element). What is the expression for the first element index i for a thread? | 
	[
  "A. i = blockIdx.x * blockDim.x + threadIdx.x + 2;",
  "B. i = blockIdx.x * threadIdx.x * 2;",
  "C. i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;",
  "D. i = blockIdx.x * blockDim.x * 2 + threadIdx.x;"
] | 
	D | 
	Each block covers 2*blockDim.x elements, so its base offset is blockIdx.x * blockDim.x * 2. Each thread handles element i in the first section and element i + blockDim.x in the second. | 
	[
  "CUDA",
  "indexing",
  "grid"
] | 
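A hedged sketch (kernel name assumed) of the two-section mapping from exercise 3, where each thread processes element i and then element i + blockDim.x:
```c
// Sketch: each block covers 2*blockDim.x elements; a thread handles i and i + blockDim.x.
__global__ void vecAddTwoSections(const float* A, const float* B, float* C, int n) {
    unsigned int i = blockIdx.x * blockDim.x * 2 + threadIdx.x;   // first section (answer D)
    if (i < n) {
        C[i] = A[i] + B[i];
    }
    if (i + blockDim.x < n) {                                     // second section
        C[i + blockDim.x] = A[i + blockDim.x] + B[i + blockDim.x];
    }
}
```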
| 2 | 
	4 | 
	mcq | 
	Vector addition with length 8000, 1 output element per thread, block size 1024. Using the minimum number of blocks to cover all elements, how many threads are in the grid? | 
	[
  "A. 8000",
  "B. 8196",
  "C. 8192",
  "D. 8200"
] | 
	C | 
	ceil(8000/1024) = 8 blocks, each with 1024 threads -> 8*1024 = 8192 threads. | 
	[
  "CUDA",
  "launch_config"
] | 
| 2 | 
	5 | 
	mcq | 
	Allocate an array of v integers in device global memory with cudaMalloc. What is the correct expression for the second argument (size in bytes)? | 
	[
  "A. n",
  "B. v",
  "C. n * sizeof(int)",
  "D. v * sizeof(int)"
] | 
	D | 
	cudaMalloc takes the size in bytes; for v integers that is v * sizeof(int). | 
	[
  "CUDA",
  "cudaMalloc",
  "API"
] | 
| 2 | 
	6 | 
	mcq | 
	Allocate an array of n floats and have pointer A_d point to it. What is the appropriate first argument to cudaMalloc? | 
	[
  "A. n",
  "B. (void*) A_d",
  "C. *A_d",
  "D. (void**) &A_d"
] | 
	D | 
	cudaMalloc's first parameter is a void** to receive the device pointer (i.e., the address of the pointer). | 
	[
  "CUDA",
  "cudaMalloc",
  "API"
] | 
| 2 | 
	7 | 
	mcq | 
	Copy 3000 bytes from host array A_h to device array A_d. Which API call is correct? | 
	[
  "A. cudaMemcpy(3000, A_h, A_d, cudaMemcpyHostToDevice);",
  "B. cudaMemcpy(A_h, A_d, 3000, cudaMemcpyDeviceToHost);",
  "C. cudaMemcpy(A_d, A_h, 3000, cudaMemcpyHostToDevice);",
  "D. cudaMemcpy(3000, A_d, A_h, cudaMemcpyHostToDevice);"
] | 
	C | 
	Syntax is cudaMemcpy(dst, src, sizeBytes, kind). Here we copy from host to device. | 
	[
  "CUDA",
  "cudaMemcpy",
  "API"
] | 
| 2 | 
	8 | 
	mcq | 
	How to declare variable err to receive return values of CUDA API calls? | 
	[
  "A. int err;",
  "B. cudaError err;",
  "C. cudaError_t err;",
  "D. cudaSuccess_t err;"
] | 
	C | 
	CUDA API error return type is cudaError_t. | 
	[
  "CUDA",
  "error_handling",
  "API"
] | 
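A host-side sketch tying exercises 5-8 together (the helper function name is an assumption): byte-sized cudaMalloc, (void**)&pointer as its first argument, cudaMemcpy(dst, src, bytes, kind), and a cudaError_t check.
```c
#include <stdio.h>
#include <cuda_runtime.h>

// Sketch: allocate v floats on the device, copy them from the host, and check for errors.
void alloc_and_copy(const float* A_h, float** A_d, unsigned int v) {
    cudaError_t err = cudaMalloc((void**)A_d, v * sizeof(float));   // size argument is in bytes
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc: %s\n", cudaGetErrorString(err));
        return;
    }
    err = cudaMemcpy(*A_d, A_h, v * sizeof(float), cudaMemcpyHostToDevice);  // (dst, src, bytes, kind)
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy: %s\n", cudaGetErrorString(err));
    }
}
```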
| 2 | 
	9a | 
	short_answer | 
	Given the CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int N) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        b[i] = 2.7f * a[i] - 4.3f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 200000;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);
}
```
(a) What is the number of threads **per block**? | null | 
	128 | 
	Given by the kernel launch <<<..., 128>>>. | 
	[
  "CUDA",
  "launch_config"
] | 
| 2 | 
	9b | 
	short_answer | 
	Given the CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int N) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        b[i] = 2.7f * a[i] - 4.3f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 200000;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);
}
```
(b) What is the **number of threads in the grid**? | null | 
	200064 | 
	Blocks = ceil(200000/128) = (200000 + 127) / 128 = 1563; threads = 1563 * 128 = 200064. | 
	[
  "CUDA",
  "launch_config",
  "arithmetic"
] | 
| 2 | 
	9c | 
	short_answer | 
	Given the CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int N) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        b[i] = 2.7f * a[i] - 4.3f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 200000;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);
}
```
(c) What is the **number of blocks in the grid**? | null | 
	1563 | 
	Computed as (N + 128 - 1) / 128 with N = 200000. | 
	[
  "CUDA",
  "launch_config"
] | 
| 2 | 
	9d | 
	short_answer | 
	Given the CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int N) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        b[i] = 2.7f * a[i] - 4.3f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 200000;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);
}
```
(d) How many threads **execute the index computation line** `unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;`? | null | 
	200064 | 
	All launched threads execute the index computation line. | 
	[
  "CUDA",
  "control_flow"
] | 
| 2 | 
	9e | 
	short_answer | 
	Given the CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int N) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        b[i] = 2.7f * a[i] - 4.3f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 200000;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);
}
```
(e) How many threads **execute the assignment inside the `if (i < N)`** - i.e., `b[i] = 2.7f * a[i] - 4.3f;`? | null | 
	200000 | 
	Only threads with i < N execute the body; extra 64 threads fail the predicate. | 
	[
  "CUDA",
  "control_flow",
  "bounds_check"
] | 
| 3 | 
	3a | 
	short_answer | 
	Given the following CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {
    unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        b[row*N + col] = a[row*N + col]/2.1f + 4.8f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int M = 150;
    unsigned int N = 300;
    dim3 bd(16, 32);
    dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);
    foo_kernel<<<gd, bd>>>(a_d, b_d, M, N);
}
```
(a) What is the number of threads per block? | null | 
	512 | 
	bd = (16,32) ⇒ threadsPerBlock = 16x32 = 512. | 
	[
  "CUDA",
  "launch_config",
  "threads_per_block"
] | 
| 3 | 
	3b | 
	short_answer | 
	Given the following CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {
    unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        b[row*N + col] = a[row*N + col]/2.1f + 4.8f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int M = 150;
    unsigned int N = 300;
    dim3 bd(16, 32);
    dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);
    foo_kernel<<<gd, bd>>>(a_d, b_d, M, N);
}
```
(b) What is the number of threads in the grid? | null | 
	48640 | 
	gd = (19,5) ⇒ blocks = 19x5 = 95. Threads = 95x512 = 48,640. | 
	[
  "CUDA",
  "launch_config",
  "thread_count",
  "2D_grid"
] | 
| 3 | 
	3c | 
	short_answer | 
	Given the following CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {
    unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        b[row*N + col] = a[row*N + col]/2.1f + 4.8f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int M = 150;
    unsigned int N = 300;
    dim3 bd(16, 32);
    dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);
    foo_kernel<<<gd, bd>>>(a_d, b_d, M, N);
}
```
(c) What is the number of blocks in the grid? | null | 
	95 | 
	Blocks = gd.x x gd.y = 19 x 5 = 95. | 
	[
  "CUDA",
  "grid_dim",
  "launch_config"
] | 
| 3 | 
	3d | 
	short_answer | 
	Given the following CUDA code:
```c
__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {
    unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        b[row*N + col] = a[row*N + col]/2.1f + 4.8f;
    }
}
void foo(float* a_d, float* b_d) {
    unsigned int M = 150;
    unsigned int N = 300;
    dim3 bd(16, 32);
    dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);
    foo_kernel<<<gd, bd>>>(a_d, b_d, M, N);
}
```
(d) How many threads execute the assignment `b[row*N + col] = a[row*N + col]/2.1f + 4.8f;`? | null | 
	45000 | 
	Only threads with (row < M && col < N) execute it. Count = MxN = 150x300 = 45,000. | 
	[
  "CUDA",
  "control_flow",
  "bounds_check"
] | 
| 3 | 
	4a | 
	short_answer | 
	A 2D matrix has width=400 and height=500 and is stored as a 1D array in row-major order. What is the linear index of the element at row=20, col=10? | null | 
	8010 | 
	Row-major index = row*width + col = 20*400 + 10 = 8,010. | 
	[
  "CUDA",
  "indexing",
  "row_major",
  "linearization"
] | 
| 3 | 
	4b | 
	short_answer | 
	A 2D matrix has width=400 and height=500 and is stored as a 1D array in column-major order. What is the linear index of the element at row=20, col=10? | null | 
	5020 | 
	Column-major index = col*height + row = 10*500 + 20 = 5,020. | 
	[
  "CUDA",
  "indexing",
  "column_major",
  "linearization"
] | 
| 3 | 
	5 | 
	short_answer | 
	A 3D tensor has width=400 (x), height=500 (y), and depth=300 (z). It is stored as a 1D array in row-major order with index mapping idx = z*height*width + y*width + x. What is the linear index of the element at x=10, y=20, z=5? | null | 
	1008010 | 
	idx = 5*500*400 + 20*400 + 10 = 1,000,000 + 8,000 + 10 = 1,008,010. | 
	[
  "CUDA",
  "indexing",
  "3D",
  "row_major",
  "linearization"
] | 
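A small host-only check (not in the dataset) reproducing the linearization arithmetic from exercises 4a, 4b, and 5 with the values given in the questions:
```c
#include <stdio.h>

int main(void) {
    int width = 400, height = 500;                // extents from the questions
    int row = 20, col = 10;
    printf("row-major:    %d\n", row * width + col);     /* 20*400 + 10 = 8010 */
    printf("column-major: %d\n", col * height + row);    /* 10*500 + 20 = 5020 */

    int x = 10, y = 20, z = 5;                    // depth = 300 does not enter the formula
    printf("3D row-major: %d\n", z * height * width + y * width + x);  /* 1,008,010 */
    return 0;
}
```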
| 4 | 
	1a | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
(a) How many warps are there per block? | null | 
	4 | 
	Each block has 128 threads and a warp has 32 threads -> 128/32 = 4 warps per block. | 
	[
  "CUDA",
  "warps",
  "launch_config"
] | 
| 4 | 
	1b | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
(b) How many warps are there in the entire grid? | null | 
	32 | 
	Blocks = (1024 + 128 - 1)/128 = 8. Warps per block = 4. Total warps = 8 x 4 = 32. | 
	[
  "CUDA",
  "warps",
  "launch_config"
] | 
| 4 | 
	1c-i | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the statement `if (threadIdx.x < 40 || threadIdx.x >= 104) { ... }`:
(i) How many warps in the grid are active on this statement? | null | 
	24 | 
	Per block: warp 0 (0-31) active; warp 1 (32-63) partially active -> warp active; warp 2 (64-95) inactive; warp 3 (96-127) partially active -> warp active. So 3 active warps/block x 8 blocks = 24. | 
	[
  "CUDA",
  "control_flow",
  "divergence"
] | 
| 4 | 
	1c-ii | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the statement `if (threadIdx.x < 40 || threadIdx.x >= 104) { ... }`:
(ii) How many warps in the grid are divergent on this statement? | null | 
	16 | 
	Per block, warp 1 (32-63) and warp 3 (96-127) have mixed predicates (some threads true, some false) -> 2 divergent warps/block x 8 blocks = 16. | 
	[
  "CUDA",
  "divergence",
  "warps"
] | 
| 4 | 
	1c-iii | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the same statement, what is the SIMD efficiency of warp 0 of block 0? Give a decimal in [0,1] with two decimals; do not include %. | null | 
	1.00 | 
	Warp 0 covers threads 0-31; all satisfy `threadIdx.x < 40`. Active lanes = 32/32 = 100%. | 
	[
  "CUDA",
  "SIMD_efficiency",
  "warps"
] | 
| 4 | 
	1c-iv | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the same statement, what is the SIMD efficiency of warp 1 of block 0? Give a decimal in [0,1] with two decimals; do not include %. | null | 
	0.25 | 
	Warp 1 covers 32-63; only 32-39 (8 lanes) satisfy the predicate. Efficiency = 8/32 = 25%. | 
	[
  "CUDA",
  "SIMD_efficiency",
  "divergence"
] | 
| 4 | 
	1c-v | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the same statement, what is the SIMD efficiency of warp 3 of block 0? Give a decimal in [0,1] with two decimals; do not include %. | null | 
	0.75 | 
	Warp 3 covers 96-127; only 104-127 (24 lanes) satisfy the predicate. Efficiency = 24/32 = 75%. | 
	[
  "CUDA",
  "SIMD_efficiency",
  "divergence"
] | 
| 4 | 
	1d-i | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the statement `if (i % 2 == 0) { ... }`:
(i) How many warps in the grid are active on this statement? | null | 
	32 | 
	All warps reach the statement; within each warp, half the threads satisfy `i % 2 == 0`, but the warp itself is active. Total warps = 32. | 
	[
  "CUDA",
  "control_flow",
  "warps"
] | 
| 4 | 
	1d-ii | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the statement `if (i % 2 == 0) { ... }`:
(ii) How many warps in the grid are divergent on this statement? | null | 
	32 | 
	Within every warp, half the lanes are even and half odd, so every warp diverges on this predicate. | 
	[
  "CUDA",
  "divergence",
  "warps"
] | 
| 4 | 
	1d-iii | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the statement `if (i % 2 == 0) { ... }`:
(iii) What is the SIMD efficiency of warp 0 of block 0? Give a decimal between 0 and 1 with two decimals; do not include a % sign. | null | 
	0.50 | 
	Exactly half the lanes (even indices) are active: 16/32 = 50%. | 
	[
  "CUDA",
  "SIMD_efficiency",
  "divergence"
] | 
| 4 | 
	1e-i | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
Consider the loop `for (unsigned int j = 0; j < 5 - (i % 3); ++j) { ... }`.
(i) How many loop iterations (values of j) execute with no divergence across the entire grid? | null | 
	3 | 
	Threads with i%3 in {0,1,2} have bounds 5,4,3 respectively. For j = 0,1,2 all threads execute; for j = 3 and 4 some threads do not, causing divergence. Hence 3 non-divergent iterations. | 
	[
  "CUDA",
  "divergence",
  "control_flow"
] | 
| 4 | 
	1e-ii | 
	short_answer | 
	Consider the following CUDA kernel and host code:
```c
__global__ void foo_kernel(int* a, int* b) {
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (threadIdx.x < 40 || threadIdx.x >= 104) {
        b[i] = a[i] + 1;
    }
    if (i % 2 == 0) {
        a[i] = b[i] * 2;
    }
    for (unsigned int j = 0; j < 5 - (i % 3); ++j) {
        b[i] += j;
    }
}
void foo(int* a_d, int* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);
}
```
For the same loop:
(ii) How many loop iterations (values of j) have divergence somewhere in the grid? | null | 
	2 | 
	Iterations j = 3 and j = 4 are executed by only some threads (depending on i%3), so both are divergent. | 
	[
  "CUDA",
  "divergence",
  "control_flow"
] | 
| 4 | 
	2 | 
	short_answer | 
	Vector addition of length 2000; one output element per thread; thread block size = 512 threads. Using the minimum number of blocks to cover all elements, how many threads are in the grid? | null | 
	2048 | 
	Blocks = ceil(2000/512) = 4; threads = 4 x 512 = 2048. | 
	[
  "CUDA",
  "launch_config",
  "arithmetic"
] | 
| 4 | 
	3 | 
	short_answer | 
	For a vector addition with 2000 elements and 512 threads per block (using the minimum number of blocks), how many warps do you expect to have divergence due to the boundary check (threads skipping work past N)? | null | 
	1 | 
	Total warps = 2048/32 = 64. Only the warp covering thread indices 1984-2015 has some active (<=1999) and some inactive (>=2000) threads. The final warp (2016-2047) has all threads inactive (no divergence). | 
	[
  "CUDA",
  "divergence",
  "warps"
] | 
| 4 | 
	4 | 
	short_answer | 
	A block with 8 threads executes a section before a barrier. Times (us) to reach the barrier are: 2.0, 2.3, 3.0, 2.8, 2.4, 1.9, 2.6, 2.9. Threads then wait at the barrier until the slowest arrives. What percentage of the aggregate thread time is spent waiting? Give a percentage to one decimal place; do not include %. | null | 
	17.1 | 
	Max time = 3.0. Waiting per thread = (3.0 - t). Sum waits = 4.1 us. Aggregate time = 8 x 3.0 = 24 us. Percentage ~ 4.1/24 ~ 17.1%. | 
	[
  "CUDA",
  "synchronization",
  "barriers"
] | 
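A small host-only calculation (assumed, not from the dataset) reproducing the waiting-time percentage from exercise 4:
```c
#include <stdio.h>

int main(void) {
    double t[8] = {2.0, 2.3, 3.0, 2.8, 2.4, 1.9, 2.6, 2.9};   // arrival times in microseconds
    double slowest = 0.0, waiting = 0.0;
    for (int i = 0; i < 8; ++i) {
        if (t[i] > slowest) slowest = t[i];
    }
    for (int i = 0; i < 8; ++i) {
        waiting += slowest - t[i];                              // time each thread idles at the barrier
    }
    printf("waiting share = %.1f%%\n", 100.0 * waiting / (8.0 * slowest));  // ~17.1
    return 0;
}
```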
| 4 | 
	6 | 
	short_answer | 
	An SM supports up to 1536 threads and up to 4 blocks concurrently. For a single SM, which block size among {128, 256, 512, 1024} yields the maximum number of resident threads, and how many threads does it schedule? Return the result as a tuple (BLOCK_SIZE, THREADS). | null | 
	(512, 1536) | 
	For 128: min(4, 1536/128)=4 blocks -> 4x128=512 threads. For 256: 4 blocks -> 1024. For 512: 3 blocks (limited by total threads) -> 3x512=1536. For 1024: 1 block -> 1024. Max is 1536 with 512/block. | 
	[
  "CUDA",
  "occupancy",
  "launch_config"
] | 
| 4 | 
	7 | 
	short_answer | 
	A device allows up to 64 blocks per SM and 2048 threads per SM. For each per-SM assignment below, state if it's possible and the occupancy (% of 2048 threads):
(a) 8 blocks x 128 threads
(b) 16 blocks x 64 threads
(c) 32 blocks x 32 threads
(d) 64 blocks x 32 threads
(e) 32 blocks x 64 threads
Provide five semicolon-separated tuples (ans,occ) for (a)-(e), where ans is Yes/No and occ is an integer percent with no %. | null | 
	(Yes,50);(Yes,50);(Yes,50);(Yes,100);(Yes,100) | 
	Check blocks ≤ 64 and total threads ≤ 2048 for each case; occupancy = total_threads / 2048. | 
	[
  "CUDA",
  "occupancy",
  "SM_limits"
] | 
| 4 | 
	8 | 
	short_answer | 
	A GPU has 2048 threads/SM, 32 blocks/SM, and 65,536 registers/SM. For each kernel, can it achieve full occupancy (2048 threads/SM)? If not, what limits it?
(a) 128 threads/block, 30 registers/thread
(b) 32 threads/block, 29 registers/thread
(c) 256 threads/block, 34 registers/thread
Provide three semicolon-separated tuples (ans,limit,threads) for (a)-(c), where ans is Yes/No, limit is one of {none,blocks,registers}, and threads is an integer. | null | 
	(Yes,none,2048);(No,blocks,1024);(No,registers,1792) | 
	For each case, first bound by blocks from threads/SM, then check blocks limited by registers per block. Compare resulting resident threads to 2048. | 
	[
  "CUDA",
  "occupancy",
  "registers",
  "SM_limits"
] | 
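A hedged helper (names assumed) that reproduces the occupancy reasoning for exercise 8 by applying the thread, block, and register limits in turn:
```c
#include <stdio.h>

// Resident threads per SM under 2048-thread, 32-block, and 65,536-register limits.
static void occupancy(int threads_per_block, int regs_per_thread) {
    int blocks = 2048 / threads_per_block;                    // thread limit
    if (blocks > 32) blocks = 32;                             // block limit
    int blocks_by_regs = 65536 / (regs_per_thread * threads_per_block);  // register limit
    if (blocks_by_regs < blocks) blocks = blocks_by_regs;
    printf("%4d thr/blk, %2d regs/thr -> %4d resident threads\n",
           threads_per_block, regs_per_thread, blocks * threads_per_block);
}

int main(void) {
    occupancy(128, 30);   // (a) 2048: full occupancy
    occupancy(32, 29);    // (b) 1024: limited by 32 blocks/SM
    occupancy(256, 34);   // (c) 1792: limited by registers
    return 0;
}
```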
| 4 | 
	2a-i | 
	short_answer | 
	We multiply two 8x8 matrices C=AxB on a GPU. One thread computes one C[i,j]. Using shared-memory tiling with tile size T=2 (2x2 tiles), and assuming no caching except shared memory, how many total global-memory element loads (A and B only; ignore the 64 stores of C) are performed for the full multiplication? Give an integer; do not include units or x. | null | 
	512 | 
	Baseline (no tiling, T=1): 64 outputs x 8 MACs/output x 2 loads/MAC = 1024 loads. With tiling of size T, loads = (8/T)^2 x (8/T) x 2T^2 = 1024/T. For T=2 -> 1024/2 = 512. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "memory_bandwidth"
] | 
| 4 | 
	2a-ii | 
	short_answer | 
	Consider an 8x8 matrix multiplication with tile size T=2, what is the reduction factor in global-memory traffic versus the naive untiled case (T=1)? Give an integer; do not include units or x. | null | 
	2 | 
	Untiled: 1024 loads. T=2: 512 loads. Reduction factor = 1024 / 512 = 2x. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "memory_bandwidth"
] | 
| 4 | 
	2b-i | 
	short_answer | 
	Now use tile size T=4 (4x4 tiles) for an 8x8 matrix multiplication. How many total global-memory element loads (A and B only; ignore C stores) are performed? | null | 
	256 | 
	Using loads = 1024/T with N=8, for T=4 we have 1024/4 = 256. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "memory_bandwidth"
] | 
| 4 | 
	2b-ii | 
	short_answer | 
	Consider an 8x8 matrix multiplication with tile size T=4. What is the reduction factor in global-memory traffic relative to the naive untiled case (T=1)? Give an integer; do not include units or x. | null | 
	4 | 
	Untiled: 1024 loads. T=4: 256 loads. Reduction factor = 1024 / 256 = 4x, confirming linear scaling with tile dimension. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "memory_bandwidth"
] | 
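A host-only check (assumed) of the load counts and reduction factors in exercises 2a and 2b, using loads = 1024 / T for the 8x8 case:
```c
#include <stdio.h>

int main(void) {
    int n = 8;
    int baseline = 2 * n * n * n;                 // 1024 loads with no tiling (T = 1)
    for (int T = 2; T <= 4; T *= 2) {
        int loads = baseline / T;                 // each loaded element is reused T times
        printf("T=%d: %d loads, %dx reduction\n", T, loads, baseline / loads);
    }
    return 0;
}
```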
| 5 | 
	1 | 
	short_answer | 
	Consider matrix addition C = A + B. Can shared memory be used to reduce global memory bandwidth consumption for this kernel? Briefly justify your answer. | null | 
	No. | 
	In element-wise addition, each output element C[i,j] depends only on A[i,j] and B[i,j] once. Threads do not reuse neighbors' A or B values, so there is no inter-thread temporal locality to exploit. Caching A/B into shared memory would just add extra copies without reducing global loads. | 
	[
  "CUDA",
  "shared_memory",
  "data_reuse",
  "bandwidth"
] | 
| 5 | 
	4 | 
	short_answer | 
	Assume register and shared memory capacities are not limiting. Give one important reason why using shared memory (instead of registers) to hold values fetched from global memory can be valuable. | null | 
	Shared memory enables inter-thread data sharing within a block. | 
	Registers are private to a thread. Shared memory is visible to all threads in a block, so a value fetched once from global memory can be reused by multiple threads, reducing global traffic. | 
	[
  "CUDA",
  "shared_memory",
  "registers",
  "data_sharing"
] | 
| 5 | 
	5 | 
	short_answer | 
	For a tiled matrix-matrix multiplication kernel using 32x32 tiles, what is the reduction in global memory bandwidth usage for the input matrices M and N (compared to the untiled naive access), assuming ideal reuse within a tile? Provide the two reduction factors as M,N (integers); do not include x or text. | null | 
	32,32 | 
	Within a 32x32 tile, each loaded element of M (resp. N) is reused across 32 multiply-accumulates along the tile dimension, replacing 32 separate global loads in the naive scheme. Thus, global loads per useful use drop by ~32x for both inputs. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "bandwidth",
  "reuse"
] | 
| 5 | 
	6 | 
	short_answer | 
	A CUDA kernel is launched with 1000 thread blocks, each with 512 threads. If a variable is declared as a local (per-thread) variable inside the kernel, how many distinct instances of this variable are created during execution? | null | 
	512,000 | 
	Local variables are per-thread. Total threads = 1000 blocks x 512 threads/block = 512,000 instances. | 
	[
  "CUDA",
  "memory_model",
  "locals",
  "threads"
] | 
| 5 | 
	7 | 
	short_answer | 
	A CUDA kernel is launched with 1000 thread blocks, each with 512 threads. If a variable is declared in shared memory, how many distinct instances of this variable are created during execution? | null | 
	1,000 | 
	Shared memory is per-block. There is exactly one instance per block -> 1000 instances total. | 
	[
  "CUDA",
  "shared_memory",
  "blocks",
  "memory_model"
] | 
| 5 | 
	8a | 
	short_answer | 
	For multiplying two NxN matrices without tiling, how many times is each input element requested from global memory? | null | 
	N times. | 
	In the naive kernel each output element recomputes its dot product by reloading the same row/column elements. Each input element participates in N different outputs along the corresponding dimension, leading to N separate loads. | 
	[
  "CUDA",
  "matrix_multiplication",
  "naive",
  "global_memory"
] | 
| 5 | 
	8b | 
	short_answer | 
	For multiplying two NxN matrices with TxT tiling (ideal reuse within tiles), how many times is each input element requested from global memory? | null | 
	N/T times. | 
	Each input element is fetched once per tile-stripe it participates in along the multiply dimension. Tiling reduces redundant loads by a factor of T, so loads per element drop from N to N/T. | 
	[
  "CUDA",
  "tiling",
  "matrix_multiplication",
  "global_memory"
] | 
| 5 | 
	9a | 
	short_answer | 
	A CUDA kernel performs 36 floating-point operations and 7 global 32-bit (4-byte) memory accesses per thread. On a GPU with peak 200 GFLOP/s compute throughput and 100 GB/s memory bandwidth, is the kernel compute-bound or memory-bound? Justify briefly using a roofline-style argument. | null | 
	Memory-bound. | 
	Arithmetic intensity = 36 FLOPs / (7x4 B) = 36/28 ~ 1.286 FLOP/B. Machine balance = 200 GFLOP/s ÷ 100 GB/s = 2.0 FLOP/B. Since 1.286 < 2.0, performance is limited by memory bandwidth. | 
	[
  "roofline",
  "arithmetic_intensity",
  "compute_bound",
  "memory_bound"
] | 
| 5 | 
	9b | 
	short_answer | 
	A CUDA kernel performs 36 floating-point operations and 7 global 32-bit (4-byte) memory accesses per thread. On a GPU with peak 300 GFLOP/s compute throughput and 250 GB/s memory bandwidth, is the kernel compute-bound or memory-bound? Justify briefly using a roofline-style argument. | null | 
	Compute-bound. | 
	Arithmetic intensity = 36 FLOPs / (7x4 B) = 36/28 ~ 1.286 FLOP/B. Machine balance = 300 GFLOP/s ÷ 250 GB/s = 1.2 FLOP/B. Since 1.286 > 1.2, performance is limited by compute throughput. | 
	[
  "roofline",
  "arithmetic_intensity",
  "compute_bound",
  "memory_bound"
] | 
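A hedged sketch (function name assumed) of the roofline comparison used in exercises 9a and 9b:
```c
#include <stdio.h>

// Compare the kernel's arithmetic intensity to the machine balance of each GPU.
static void classify(double peak_gflops, double peak_gbps) {
    double intensity = 36.0 / (7.0 * 4.0);        // 36 FLOPs per 28 bytes ~ 1.286 FLOP/B
    double balance = peak_gflops / peak_gbps;     // FLOP/B the hardware can sustain
    printf("%.0f GFLOP/s, %.0f GB/s -> %s-bound\n",
           peak_gflops, peak_gbps, intensity < balance ? "memory" : "compute");
}

int main(void) {
    classify(200.0, 100.0);   // (a) memory-bound
    classify(300.0, 250.0);   // (b) compute-bound
    return 0;
}
```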
| 5 | 
	10a | 
	short_answer | 
	You are given this tile-transpose kernel (abbreviated):
```cpp
dim3 blockDim(BLOCK_WIDTH,BLOCK_WIDTH);
dim3 gridDim(A_width/blockDim.x, A_height/blockDim.y);
BlockTranspose<<<gridDim, blockDim>>>(A, A_width, A_height);
__global__ void BlockTranspose(float* A_elements, int A_width, int A_height) {
    __shared__ float blockA[BLOCK_WIDTH][BLOCK_WIDTH];
    int baseIdx = blockIdx.x * BLOCK_WIDTH + threadIdx.x;
    baseIdx += (blockIdx.y * BLOCK_WIDTH + threadIdx.y) * A_width;
    blockA[threadIdx.y][threadIdx.x] = A_elements[baseIdx];
    // (no barrier here)
    A_elements[baseIdx] = blockA[threadIdx.x][threadIdx.y];
}
```
For which BLOCK_WIDTH values does this execute correctly? | null | 
	Only BLOCK_WIDTH = 1. | 
	Without a barrier, some threads can read from shared memory before peers have written their elements. With a 1x1 block, there is only one thread and no race; for any larger tile, a race exists. | 
	[
  "CUDA",
  "synchronization",
  "shared_memory",
  "barriers"
] | 
| 5 | 
	10b | 
	short_answer | 
	You are given this tile-transpose kernel (abbreviated):
```cpp
dim3 blockDim(BLOCK_WIDTH, BLOCK_WIDTH);
dim3 gridDim(A_width / blockDim.x, A_height / blockDim.y);
BlockTranspose<<<gridDim, blockDim>>>(A, A_width, A_height);
__global__ void BlockTranspose(float* A_elements, int A_width, int A_height) {
    __shared__ float blockA[BLOCK_WIDTH][BLOCK_WIDTH];
    int baseIdx = blockIdx.x * BLOCK_WIDTH + threadIdx.x;
    baseIdx += (blockIdx.y * BLOCK_WIDTH + threadIdx.y) * A_width;
    blockA[threadIdx.y][threadIdx.x] = A_elements[baseIdx];
    // (no barrier here)
    A_elements[baseIdx] = blockA[threadIdx.x][threadIdx.y];
}
```
Explain the root cause of incorrect execution for BLOCK_WIDTH > 1 and give a minimal fix that makes it correct for any BLOCK_WIDTH >= 1. Give a single token naming the minimal synchronization fix. | null | 
	__syncthreads() | 
	All threads must complete their writes to shared memory before any thread reads `blockA[tx][ty]`. Without a barrier, reads can see stale values (race). Adding `__syncthreads();` between the store and the load enforces correctness for any tile size. | 
	[
  "CUDA",
  "synchronization",
  "shared_memory",
  "barriers"
] | 
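A sketch of the corrected kernel from exercises 10a/10b with the barrier inserted; the BLOCK_WIDTH value shown is an assumption for illustration.
```c
#define BLOCK_WIDTH 16   // any value >= 1 is safe once the barrier is in place

__global__ void BlockTranspose(float* A_elements, int A_width, int A_height) {
    __shared__ float blockA[BLOCK_WIDTH][BLOCK_WIDTH];
    int baseIdx = blockIdx.x * BLOCK_WIDTH + threadIdx.x;
    baseIdx += (blockIdx.y * BLOCK_WIDTH + threadIdx.y) * A_width;
    blockA[threadIdx.y][threadIdx.x] = A_elements[baseIdx];
    __syncthreads();   // every write to blockA completes before any thread reads it
    A_elements[baseIdx] = blockA[threadIdx.x][threadIdx.y];
}
```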
| 5 | 
	11a | 
	short_answer | 
	Consider the following CUDA code:
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);
}
```
How many distinct instances of variable `i` exist during execution? | null | 
	1,024 | 
	Blocks = (1024 + 127) / 128 = 8; threads per block = 128; total threads = 8 x 128 = 1,024. `i` is per-thread. | 
	[
  "CUDA",
  "locals",
  "launch_config",
  "threads"
] | 
| 5 | 
	11b | 
	short_answer | 
	Consider the following CUDA code:
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);
}
```
How many distinct instances of the local array `x[4]` are created during execution? | null | 
	1,024 | 
	`x` is a per-thread local array. With 1,024 threads total, there are 1,024 instances. | 
	[
  "CUDA",
  "locals",
  "stack_memory",
  "threads"
] | 
| 5 | 
	11c | 
	short_answer | 
	Consider the following CUDA code:
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);
}
```
How many distinct instances of the shared variable `y_s` are created during execution? | null | 
	8 | 
	Shared memory is per-block. Blocks = (1024 + 127) / 128 = 8 -> 8 instances of `y_s`. | 
	[
  "CUDA",
  "shared_memory",
  "blocks"
] | 
| 5 | 
	11d | 
	short_answer | 
	Consider the following CUDA code:
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);
}
```
How many distinct instances of the shared array `b_s[128]` are created during execution? | null | 
	8 | 
	Shared arrays are also per-block. With 8 blocks, there are 8 instances of `b_s`. | 
	[
  "CUDA",
  "shared_memory",
  "blocks"
] | 
| 5 | 
	11e | 
	short_answer | 
	Consider the following CUDA code:
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
void foo(float* a_d, float* b_d) {
    unsigned int N = 1024;
    foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);
}
```
What is the amount of shared memory used per block (in bytes)? Give an integer (bytes); do not include units. | null | 
	516 | 
	`y_s`: 1 float = 4 bytes; `b_s[128]`: 128 floats = 512 bytes; total = 516 bytes. | 
	[
  "CUDA",
  "shared_memory",
  "resources"
] | 
| 5 | 
	11f | 
	short_answer | 
	Consider the following CUDA code and compute the floating-point-operations-per-byte (OP/B) ratio per thread with respect to global memory traffic (assume 4-byte floats, count both reads and the final write):
```cpp
__global__ void foo_kernel(float* a, float* b) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x[4];
    __shared__ float y_s;
    __shared__ float b_s[128];
    for (unsigned int j = 0; j < 4; ++j) {
        x[j] = a[j * blockDim.x * gridDim.x + i];
    }
    if (threadIdx.x == 0) { y_s = 7.4f; }
    b_s[threadIdx.x] = b[i];
    __syncthreads();
    b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]
         + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];
}
```
What is the OP/B value? Round to 3 decimals; provide only the numeric value. | null | 
	0.417 | 
	Per thread FLOPs = 5 mul + 5 add = 10. Global traffic: 4 reads from `a` (16 B) + 1 read from `b` (4 B) + 1 write to `b` (4 B) = 24 B. OP/B = 10 / 24 ~ 0.417. | 
	[
  "CUDA",
  "operational_intensity",
  "roofline",
  "memory_traffic"
] | 
| 5 | 
	12a | 
	short_answer | 
	GPU limits: 2048 threads/SM, 32 blocks/SM, 65,536 registers/SM, 96 KB shared memory/SM. Kernel uses 64 threads/block, 27 registers/thread, and **4 KB shared memory per block**. Can it reach full occupancy (2048 threads/SM)? If not, what limits it and what is the achieved occupancy? Provide a tuple (Yes|No,limiting_resource,threads) with limiting_resource in {none,shared_memory,blocks,registers}. | null | 
	(No,shared_memory,1536) | 
	Max blocks by threads = 2048/64 = 32 (OK). Registers: 27x2048 = 55,296 < 65,536 (OK). Shared memory: at 4 KB per block, 96 KB/4 KB = 24 blocks fit, so threads = 24x64 = 1,536 -> 75% occupancy. Limiting factor: shared memory per SM. | 
	[
  "CUDA",
  "occupancy",
  "resources",
  "shared_memory",
  "registers"
] | 
| 5 | 
	12b | 
	short_answer | 
	GPU limits: 2048 threads/SM, 32 blocks/SM, 65,536 registers/SM, 96 KB shared memory/SM. Kernel uses 256 threads/block, 31 registers/thread, and **8 KB shared memory per block**. Can it reach full occupancy (2048 threads/SM)? If not, what limits it and what is the achieved occupancy? Provide a tuple (Yes|No,limiting_resource,threads) with limiting_resource in {none,shared_memory,blocks,registers}. | null | 
	(Yes,none,2048) | 
	Max blocks by threads = 2048/256 = 8. Registers: 31x2048 = 63,488 < 65,536 (OK). Shared memory: 8 blocks x 8 KB = 64 KB < 96 KB (OK). Blocks/SM limit is 32 (not binding). All constraints allow 8 blocks x 256 threads = 2048. | 
	[
  "CUDA",
  "occupancy",
  "resources",
  "shared_memory",
  "registers"
] | 
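A hedged variant of the same occupancy arithmetic for exercises 12a/12b (names assumed), adding the shared-memory limit of 96 KB per SM:
```c
#include <stdio.h>

// Resident threads per SM: 2048 threads, 32 blocks, 65,536 registers, 96 KB shared memory.
static void occupancy(int threads, int regs_per_thread, int smem_per_block) {
    int blocks = 2048 / threads;                               // thread limit
    if (blocks > 32) blocks = 32;                              // block limit
    int by_regs = 65536 / (regs_per_thread * threads);         // register limit
    int by_smem = (96 * 1024) / smem_per_block;                // shared-memory limit
    if (by_regs < blocks) blocks = by_regs;
    if (by_smem < blocks) blocks = by_smem;
    printf("%3d thr/blk -> %4d resident threads\n", threads, blocks * threads);
}

int main(void) {
    occupancy(64, 27, 4 * 1024);    // 12a: 1536 (75%), limited by shared memory
    occupancy(256, 31, 8 * 1024);   // 12b: 2048, full occupancy
    return 0;
}
```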
| 6 | 
	2a | 
	mcq | 
	A 2D tiled GEMM uses a BLOCK_SIZExBLOCK_SIZE thread block. Threads are indexed (ty, tx). In each phase, threads cooperatively load: M[row, ph*BLOCK_SIZE + tx] and (corner-turned) N[ph*BLOCK_SIZE + ty, col], where row = blockIdx.y*BLOCK_SIZE + ty and col = blockIdx.x*BLOCK_SIZE + tx. Arrays are row-major 4-byte floats. Warps have 32 lanes and lanes vary along x (a warp spans 32 consecutive tx at fixed ty). Which BLOCK_SIZE guarantees that both the M and N loads of every warp are fully coalesced into a single contiguous segment? | 
	[
  "A. 8",
  "B. 16",
  "C. 32",
  "D. Any power of two"
] | 
	C | 
	With warps laid out along x, coalescing requires each warp to cover 32 consecutive tx at fixed ty so addresses are contiguous for both loads. BLOCK_SIZE=32 aligns one warp per row; 8 or 16 split a warp across multiple rows. | 
	[
  "CUDA",
  "coalescing",
  "tiling",
  "warps",
  "memory"
] | 
| 6 | 
	2b | 
	mcq | 
	Using the setup from exercise 2a (BLOCK_SIZExBLOCK_SIZE tiled GEMM, corner-turned N load, row-major floats, 32-lane warps spanning x), if BLOCK_SIZE=16, what is the coalescing behavior of a warp's global loads for M and N? | 
	[
  "A. Fully coalesced into a single contiguous segment per warp",
  "B. Two contiguous segments per warp (warp spans two 16-wide rows)",
  "C. Uncoalesced/random access",
  "D. Depends only on base address alignment"
] | 
	B | 
	A 32-lane warp spans two 16-wide rows at fixed ty, so both M and N loads become two 16-element contiguous segments per warp rather than one 32-element segment. | 
	[
  "CUDA",
  "coalescing",
  "warps",
  "tiling"
] | 
| 6 | 
	3a | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a[i]. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	A | 
	Consecutive threads in a warp access consecutive elements a[i]; this is the canonical coalesced pattern. | 
	[
  "CUDA",
  "coalescing",
  "global_memory"
] | 
| 6 | 
	3b | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a_s[threadIdx.x], where a_s is declared in __shared__ memory. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	C | 
	Coalescing applies to global memory transactions. Shared memory has different banking rules; coalescing classification is not applicable. | 
	[
  "CUDA",
  "shared_memory",
  "coalescing"
] | 
| 6 | 
	3c | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: b[j*blockDim.x*gridDim.x + i] with j fixed inside the loop. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	A | 
	For a fixed j, threads in a warp access consecutive indices offset by a constant base; addresses are contiguous across the warp. | 
	[
  "CUDA",
  "coalescing",
  "global_memory"
] | 
| 6 | 
	3d | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: c[i*4 + j], with j fixed inside the loop. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	B | 
	Across a warp, i increases by 1 so addresses stride by 4 elements (16 B) per thread, leading to multiple memory transactions (not contiguous). | 
	[
  "CUDA",
  "coalescing",
  "strided_access"
] | 
| 6 | 
	3e | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: bc_s[j*256 + threadIdx.x], where bc_s is __shared__ memory. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	C | 
	Shared memory access; global-memory coalescing classification does not apply. | 
	[
  "CUDA",
  "shared_memory"
] | 
| 6 | 
	3f | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a_s[threadIdx.x] (read), where a_s is __shared__ memory. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	C | 
	Shared memory read; coalescing is a global-memory concept. | 
	[
  "CUDA",
  "shared_memory"
] | 
| 6 | 
	3g | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: d[i + 8] (global write). | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	A | 
	The +8 is a constant offset; adjacent threads still write consecutive locations, so the warp issues contiguous transactions. | 
	[
  "CUDA",
  "coalescing",
  "global_store"
] | 
| 6 | 
	3h | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: bc_s[threadIdx.x*4] (read), where bc_s is __shared__ memory. | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	C | 
	Shared memory read; coalescing classification is not applicable (banking rules apply instead). | 
	[
  "CUDA",
  "shared_memory"
] | 
| 6 | 
	3i | 
	mcq | 
	Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: e[i*8] (global write). | 
	[
  "A. Coalesced",
  "B. Uncoalesced",
  "C. Not applicable (shared memory)",
  "D. Unaligned but still fully coalesced due to caching"
] | 
	B | 
	Thread addresses stride by 8 elements (32 B) per lane across the warp; not a single contiguous segment, so it's uncoalesced. | 
	[
  "CUDA",
  "coalescing",
  "strided_access",
  "global_store"
] | 
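A toy kernel (assumed, not in the dataset) contrasting the global access patterns classified above: the a[i] store is coalesced, while the strided c and e stores are not.
```c
__global__ void access_patterns(float* a, float* c, float* e, int j) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    a[i] = 1.0f;           // consecutive lanes touch consecutive 4-byte words: coalesced (3a)
    c[i * 4 + j] = 2.0f;   // 16-byte stride between lanes: uncoalesced (3d)
    e[i * 8] = 3.0f;       // 32-byte stride between lanes: uncoalesced (3i)
}
```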
| 6 | 
	4a | 
	short_answer | 
	Arithmetic intensity (FLOP/B) for naive GEMM: One thread computes P[i,j] with a loop over k=0..n-1, doing one multiply and one add per k (2 FLOPs) and reading M[i,k] and N[k,j] from global memory each iteration. Assume 4-byte floats and ignore output writes. What is the arithmetic intensity? | null | 
	0.25 | 
	Per output: 2n FLOPs and 2n reads x 4 B = 8n B -> (2n)/(8n) = 0.25 FLOP/B. | 
	[
  "roofline",
  "arithmetic_intensity",
  "GEMM"
] | 
| 6 | 
	4b | 
	short_answer | 
	Arithmetic intensity (FLOP/B) for tiled GEMM with BLOCK_SIZE=T: Each phase loads one TxT tile of M and one TxT tile of N from global memory and reuses them from shared memory to produce a TxT output tile. Assume 4-byte floats and ignore output writes. For T=32, what is the arithmetic intensity? | null | 
	8 | 
	Global reads per output = 2n/T elements -> 8n/T bytes; FLOPs per output = 2n. Intensity = (2n)/(8n/T) = T/4 = 8 for T=32. | 
	[
  "roofline",
  "arithmetic_intensity",
  "tiling",
  "GEMM"
] | 
| 6 | 
	4c | 
	short_answer | 
	Arithmetic intensity (FLOP/B) for tiled GEMM with thread coarsening factor C=4: The M tile loaded from global memory is reused across 4 adjacent output tiles; N tiles are not further reused beyond standard tiling. Assume 4-byte floats and ignore output writes. For T=32, what is the arithmetic intensity? | null | 
	12.8 | 
	Reads per output: M contributes n/(4T), N contributes n/T -> (5n)/(4T) elements -> (5n)/T bytes. FLOPs per output = 2n. Intensity = (2n)/((5n)/T) = 2T/5 = 12.8 for T=32. | 
	[
  "roofline",
  "arithmetic_intensity",
  "tiling",
  "coarsening",
  "GEMM"
] | 
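A host-only check (assumed) of the three arithmetic-intensity results from exercises 4a-4c; the n factors cancel, so only the tile width T enters:
```c
#include <stdio.h>

int main(void) {
    double T = 32.0;                                          // tile width used in 4b and 4c
    printf("naive:         %.2f FLOP/B\n", 2.0 / 8.0);        /* 2n FLOPs per 8n bytes */
    printf("tiled (T=32):  %.2f FLOP/B\n", T / 4.0);          /* 2n / (8n/T) = T/4     */
    printf("coarsened x4:  %.1f FLOP/B\n", 2.0 * T / 5.0);    /* 2n / (5n/T) = 2T/5    */
    return 0;
}
```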
| 7 | 
	2 | 
	mcq | 
	Perform 1D discrete convolution with zero-padding (output length equals input length): N = {4, 1, 3, 2, 3}, F = {2, 1, 4}. Use P[i] = sum_{k=0..2} F[k] * N[i - 1 + k], treating out-of-bounds N as 0. What is P? | 
	[
  "A. [8, 21, 13, 20, 7]",
  "B. [4, 12, 17, 14, 9]",
  "C. [2, 9, 12, 15, 11]",
  "D. [0, 8, 21, 13, 20]"
] | 
	A | 
	Zero-padding with radius r=1 yields P = [8, 21, 13, 20, 7]. | 
	[
  "convolution",
  "1D",
  "discrete",
  "zero_padding"
] | 
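A host-only check (assumed) of the zero-padded convolution in exercise 2, using the formula stated in the question:
```c
#include <stdio.h>

int main(void) {
    int N[5] = {4, 1, 3, 2, 3};
    int F[3] = {2, 1, 4};
    for (int i = 0; i < 5; ++i) {
        int p = 0;
        for (int k = 0; k < 3; ++k) {
            int idx = i - 1 + k;
            if (idx >= 0 && idx < 5) {
                p += F[k] * N[idx];          // out-of-bounds neighbors contribute 0
            }
        }
        printf("%d ", p);                    // prints: 8 21 13 20 7
    }
    printf("\n");
    return 0;
}
```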
| 7 | 
	3a | 
	mcq | 
	In 1D discrete convolution with zero-padding, what operation does the filter [0, 1, 0] primarily perform on a signal x? | 
	[
  "A. Identity (pass-through): y[i] = x[i]",
  "B. Right shift by 1: y[i] = x[i-1]",
  "C. Left shift by 1: y[i] = x[i+1]",
  "D. 3-point moving average"
] | 
	A | 
	[0,1,0] preserves the center sample, acting as an identity under the stated assumptions. | 
	[
  "convolution",
  "filters",
  "signal_processing"
] | 
| 7 | 
	3b | 
	mcq | 
	In 1D discrete convolution with zero-padding, what operation does the filter [0, 0, 1] primarily perform? | 
	[
  "A. Identity (pass-through)",
  "B. Right shift by 1 sample",
  "C. Left shift by 1 sample",
  "D. 3-point moving average"
] | 
	B | 
	With the conventions used here, [0,0,1] produces a right shift by 1 (y[i] ~ x[i-1]). | 
	[
  "convolution",
  "filters",
  "signal_processing"
] | 
| 7 | 
	3c | 
	mcq | 
	In 1D discrete convolution with zero-padding, what operation does the filter [1, 0, 0] primarily perform? | 
	[
  "A. Right shift by 1 sample",
  "B. Left shift by 1 sample",
  "C. Identity (pass-through)",
  "D. High-pass smoothing"
] | 
	B | 
	With the conventions used here, [1,0,0] produces a left shift by 1 (y[i] ~ x[i+1]). | 
	[
  "convolution",
  "filters",
  "signal_processing"
] | 
| 7 | 
	3d | 
	mcq | 
	In 1D discrete convolution, what is the primary effect of the filter [-1/2, 0, 1/2]? | 
	[
  "A. Low-pass smoothing",
  "B. First-derivative (edge detection)",
  "C. Identity",
  "D. Right shift by 1"
] | 
	B | 
	It approximates a first derivative, responding to rapid changes (edges). | 
	[
  "convolution",
  "edge_detection",
  "derivative"
] | 
| 7 | 
	3e | 
	mcq | 
	In 1D discrete convolution, what is the primary effect of the filter [1/3, 1/3, 1/3]? | 
	[
  "A. High-pass edge enhancer",
  "B. Left shift by 1",
  "C. 3-point moving average (smoothing)",
  "D. Identity"
] | 
	C | 
	Equal weights average the local neighborhood, smoothing noise. | 
	[
  "convolution",
  "smoothing",
  "moving_average"
] | 
| 7 | 
	4a | 
	mcq | 
	1D convolution on an array of length N with an odd-sized filter of length M = 2r+1 (r = (M-1)/2). How many ghost (zero-padded) cells are there in total? | 
	[
  "A. r",
  "B. 2r",
  "C. M - 1",
  "D. N + M"
] | 
	C | 
	Total ghost cells = r on the left + r on the right = 2r = M - 1. | 
	[
  "convolution",
  "ghost_cells",
  "padding"
] | 
| 7 | 
	4b | 
	mcq | 
	1D convolution on array length N with odd-sized filter M (using zero-padding, counting multiplications even when reading zeros). How many total multiplications are performed? | 
	[
  "A. N × M",
  "B. N × (M - 1)",
  "C. (N - M) × M",
  "D. 2N × M"
] | 
	A | 
	Each of N outputs multiplies M taps (zeros included) -> NxM. | 
	[
  "convolution",
  "complexity"
] | 
| 7 | 
	5a | 
	mcq | 
	2D convolution on an NxN image with an odd-sized MxM filter (M = 2r+1). With zero-padding, how many ghost cells surround the image in total? | 
	[
  "A. 4Nr",
  "B. 2N(M-1)",
  "C. 4r(N + r)",
  "D. N^2 - (N - 2r)^2"
] | 
	C | 
	Padding adds r rows/cols around; total ghost cells = 4r(N + r). | 
	[
  "convolution",
  "2D",
  "ghost_cells"
] | 
| 7 | 
	5b | 
	mcq | 
	2D convolution on an NxN image with an MxM filter (zero-padding, counting multiplications even on zeros). How many total multiplications are performed? | 
	[
  "A. N^2 × M^2",
  "B. N × M",
  "C. (N - M + 1)^2 × M^2",
  "D. 2N × M"
] | 
	A | 
	Each of N^2 outputs multiplies M^2 taps -> N^2xM^2. | 
	[
  "convolution",
  "2D",
  "complexity"
] | 
| 7 | 
	6a | 
	mcq | 
	2D convolution on an N1xN2 image with an odd-sized M1xM2 filter. Let r1=(M1-1)/2 and r2=(M2-1)/2. With zero-padding, how many ghost cells are there in total? | 
	[
  "A. 2(N1 r2 + N2 r1)",
  "B. 2(N1 r2 + N2 r1) + 4 r1 r2",
  "C. (N1 + N2)(r1 + r2)",
  "D. N1 N2 (r1 + r2)"
] | 
	B | 
	Edges contribute 2(N1 r2 + N2 r1); corners add 4 r1 r2. | 
	[
  "convolution",
  "2D",
  "ghost_cells",
  "rectangular"
] | 
| 7 | 
	6b | 
	mcq | 
	2D convolution on an N1xN2 image with an M1xM2 filter (zero-padding, counting multiplications even on zeros). How many total multiplications are performed? | 
	[
  "A. N1 N2 M1 M2",
  "B. N1 N2 (M1 + M2)",
  "C. (N1 - M1 + 1)(N2 - M2 + 1) M1 M2",
  "D. 2 N1 N2 M1"
] | 
	A | 
	Each of N1xN2 outputs multiplies M1xM2 taps -> N1 N2 M1 M2. | 
	[
  "convolution",
  "2D",
  "complexity",
  "rectangular"
] | 
| 7 | 
	7a | 
	mcq | 
	A 2D tiled convolution uses an output tile of size TxT and an odd-sized filter with radius r=(M-1)/2. Input tiles are (T+2r)x(T+2r) due to halo. For an NxN output, how many thread blocks are needed? | 
	[
  "A. ceil(N/T) × ceil(N/T)",
  "B. (N/T) × (N/T) (truncate)",
  "C. N × N",
  "D. ceil(N/(T+2r)) × ceil(N/(T+2r))"
] | 
	A | 
	Each block produces a TxT output tile; tiling the output requires ceil(N/T) in each dimension. | 
	[
  "tiled_convolution",
  "grid_sizing"
] | 
| 7 | 
	7b | 
	mcq | 
	In a tiled 2D convolution setup where the output tile is TxT and the halo radius is r, the block loads a (T+2r)x(T+2r) input tile into shared memory. Assuming one thread per input-tile element, how many threads are needed per block? | 
	[
  "A. T^2",
  "B. (T+r)^2",
  "C. (T+2r)^2",
  "D. 2T(T+r)"
] | 
	C | 
	One thread per input-tile element -> (T+2r)^2 threads per block. | 
	[
  "tiled_convolution",
  "block_size",
  "threads_per_block"
] | 
| 7 | 
	7c | 
	mcq | 
	In a tiled 2D convolution setup where the output tile is TxT and the halo radius is r, the block loads a (T+2r)x(T+2r) input tile into shared memory and allocates a shared-memory array to hold this input tile. How much shared memory is needed per block (in bytes) for single-precision floats? | 
	[
  "A. T^2 × 4",
  "B. (T+2r)^2 × 4",
  "C. (T+2r) × 4",
  "D. 0"
] | 
	B | 
	The shared tile size is (T+2r)x(T+2r) floats; at 4 bytes/float -> (T+2r)^2 x 4 bytes. | 
	[
  "tiled_convolution",
  "shared_memory"
] | 
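A host-only sizing check for the shared-memory tiled convolution resources in exercises 7a-7c; the values of N, T, and r below are illustrative assumptions, not taken from the questions.
```c
#include <stdio.h>

int main(void) {
    int N = 1024, T = 32, r = 2;                       // illustrative sizes, not from the exercises
    int blocks_per_dim = (N + T - 1) / T;              // ceil(N / T) output tiles per dimension (7a)
    int in_tile = T + 2 * r;                           // input-tile edge including the halo
    printf("blocks:        %d x %d\n", blocks_per_dim, blocks_per_dim);
    printf("threads/block: %d\n", in_tile * in_tile);  // one thread per input-tile element (7b)
    printf("smem/block:    %d bytes\n", in_tile * in_tile * (int)sizeof(float));  // 7c
    return 0;
}
```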
| 7 | 
	7d1 | 
	mcq | 
	Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). For an NxN output, how many thread blocks are required? | 
	[
  "A. ceil(N/T) × ceil(N/T)",
  "B. (N/T) × (N/T) (truncate)",
  "C. N × N",
  "D. ceil(N/(T+2r)) × ceil(N/(T+2r))"
] | 
	A | 
	Each block covers a TxT region of the output, so the grid needs ceil(N/T) blocks along each dimension. | 
	[
  "tiled_convolution",
  "grid_sizing",
  "cache_based"
] | 
| 7 | 
	7d2 | 
	mcq | 
	Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). How many threads are launched per block? | 
	[
  "A. T^2",
  "B. (T+2r)^2",
  "C. 2T(T+r)",
  "D. N^2"
] | 
	A | 
	One thread per output element over a TxT tile yields TxT = T^2 threads per block. | 
	[
  "tiled_convolution",
  "threads_per_block",
  "cache_based"
] | 
| 7 | 
	7d3 | 
	mcq | 
	Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). How much shared memory is needed per block (in bytes) to hold the input tile when using single-precision floats? | 
	[
  "A. (T+2r)^2 × 4",
  "B. T^2 × 4",
  "C. 0",
  "D. 2(T+2r)^2 × 4"
] | 
	C | 
	By definition, this variant allocates no shared-memory input tile; it relies solely on hardware caches. | 
	[
  "tiled_convolution",
  "shared_memory",
  "cache_based"
] | 
| 8 | 
	1a | 
	short_answer | 
	A 3D seven-point stencil is applied on a cubic grid of size 120x120x120 (including boundary cells). The kernel only writes interior points (i=1..118, j=1..118, k=1..118). How many output grid points are computed per sweep? | null | 
	1643032 | 
	Interior count = (120-2)^3 = 118^3 = 1,643,032. | 
	[
  "stencil",
  "3D",
  "indexing",
  "counts"
] | 
| 8 | 
	1b | 
	short_answer | 
	A basic 3D stencil kernel launches one thread per grid point over a 120x120x120 domain using blocks of size 8x8x8 threads (no overhang trimming). Using ceil division per dimension, how many thread blocks are launched in total? | null | 
	3375 | 
	Blocks per dim = ceil(120/8) = 15 -> total blocks = 15^3 = 3,375. | 
	[
  "CUDA",
  "launch_config",
  "3D",
  "ceil_div"
] | 
| 8 | 
	1c | 
	short_answer | 
	A shared-memory tiled 3D stencil uses IN_TILE_DIM = 8 and a radius r = 1, so OUT_TILE_DIM = IN_TILE_DIM - 2r = 6. Over a 120x120x120 domain, blocks are placed per OUT_TILE_DIM using ceil division per dimension. How many thread blocks are launched in total? | null | 
	8000 | 
	Blocks per dim = ceil(120/6) = 20 -> total blocks = 20^3 = 8,000. | 
	[
  "CUDA",
  "tiling",
  "launch_config",
  "3D"
] | 
| 8 | 
	1d | 
	short_answer | 
	A coarsened/tiled 3D stencil uses 2D thread blocks of IN_TILE_DIMxIN_TILE_DIM = 32x32 (z handled by coarsening) with radius r = 1, so OUT_TILE_DIM = 30. Over a 120x120x120 domain, blocks are placed on a 3D grid using ceil division by OUT_TILE_DIM in each dimension. How many thread blocks are launched in total? | null | 
	64 | 
	Blocks per dim = ceil(120/30) = 4 -> total blocks = 4^3 = 64. | 
	[
  "CUDA",
  "thread_coarsening",
  "tiling",
  "launch_config"
] | 
| 8 | 
	2a | 
	short_answer | 
	A seven-point 3D stencil uses thread blocks of size IN_TILE_DIMxIN_TILE_DIM = 32x32 (z handled by coarsening). The block processes Z_COARSENING = 16 consecutive output z-planes. With radius r = 1, the block must load halo planes before the first and after the last output plane. How many input elements does a single block load over its lifetime? Assume each loaded plane is 32x32 elements. | null | 
	18432 | 
	Planes loaded = (16 output) + 2 halo = 18 planes; per plane 32x32=1024 -> 18x1024 = 18,432 elements. | 
	[
  "stencil",
  "thread_coarsening",
  "data_movement"
] | 
| 8 | 
	2b | 
	short_answer | 
	Using IN_TILE_DIM=32 and radius r=1 (so OUT_TILE_DIM = 30) with Z_COARSENING = 16, how many output elements does a single block compute over its lifetime? Each output z-plane contributes OUT_TILE_DIM x OUT_TILE_DIM elements. | null | 
	14400 | 
	Per plane: 30x30 = 900 outputs; over 16 planes: 900x16 = 14,400 outputs. | 
	[
  "stencil",
  "throughput",
  "counts"
] | 
| 8 | 
	2c | 
	short_answer | 
	For a 3D seven-point stencil with IN_TILE_DIM = 32 and radius r = 1 (so OUT_TILE_DIM = 30), and Z_COARSENING = 16, a block loads 18,432 input elements and computes 14,400 output elements. Assume 32-bit floats (4 bytes) and that each output performs 13 FLOPs (7 multiplies + 6 adds). What is the OP/B ratio for reads only? Do not include units; provide a decimal number. | null | 
	2.5390625 | 
	FLOPs = 14,400x13 = 187,200. Bytes read = 18,432x4 = 73,728. OP/B = 187,200 / 73,728 ~ 2.5390625. | 
	[
  "roofline",
  "arithmetic_intensity",
  "stencil"
] | 
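The chapter 8 stencil entries above (1b–2c) reduce to a few lines of launch-configuration arithmetic. The following is a minimal, self-contained sketch (not part of either dataset) that reproduces those numbers under the tile sizes stated in the questions (IN_TILE_DIM = 32, r = 1, Z_COARSENING = 16, a 120^3 domain); it is host-side arithmetic only.

```cuda
// Host-side arithmetic only; reproduces the launch-configuration numbers from
// the chapter 8 stencil exercises above. The constants are taken from the
// question statements, not from the coding dataset.
#include <cstdio>

int main() {
    const int N = 120;                       // grid points per dimension (incl. boundary)
    const int IN_TILE = 32;                  // input tile edge
    const int r = 1;                         // stencil radius
    const int OUT_TILE = IN_TILE - 2 * r;    // 30 output points per tile edge
    const int Z_COARSEN = 16;                // output z-planes per block

    // Exercise 1d: ceil division of the domain by the output tile.
    int blocksPerDim = (N + OUT_TILE - 1) / OUT_TILE;                        // 4
    printf("blocks = %d^3 = %d\n", blocksPerDim,
           blocksPerDim * blocksPerDim * blocksPerDim);                      // 64

    // Exercises 2a/2b: elements loaded vs. outputs computed per block.
    long long loaded  = (long long)(Z_COARSEN + 2 * r) * IN_TILE * IN_TILE;  // 18 * 1024 = 18432
    long long outputs = (long long)Z_COARSEN * OUT_TILE * OUT_TILE;          // 16 * 900  = 14400
    printf("loaded = %lld, outputs = %lld\n", loaded, outputs);

    // Exercise 2c: reads-only OP/B at 13 FLOPs per output, 4 bytes per float.
    double opb = (outputs * 13.0) / (loaded * 4.0);
    printf("OP/B = %.7f\n", opb);                                            // 2.5390625
    return 0;
}
```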
PMPP Dataset
This repository provides two CUDA-focused datasets prepared by Sinatras and sponsored by Prime Intellect. Both datasets are based on Programming Massively Parallel Processors (4th Ed.) and are accompanied by a coding evaluation harness at https://github.com/SinatrasC/pmpp-eval, used by the PMPP environment in prime-environments.
Overview
- Languages: English
- License: MIT
- Curated by: Sinatras (https://github.com/SinatrasC)
- Sponsored by: Prime Intellect
- Derived from: PMPP 4th Edition (Kirk & Hwu)
Dataset Details
pmpp_qa
- Composition: 61 MCQ + 77 short-answer items.
- Fields: chapter, exercise, type, question, answer, explanation, topic_tags, and an optional choices field.
- Topics emphasize CUDA indexing, occupancy, memory hierarchy, MPI, and dynamic parallelism.
pmpp_coding
- 53 coding tasks. Every entry corresponds to evaluation-tasks/<id>/student_kernel.cu and includes runner metadata.
- Fields: id, task_dir, student_file (always student_kernel.cu), optional test targets/executables, and the trimmed CUDA skeleton.
- Some tasks export host wrappers (e.g., device property collection, one-pass radix) rather than __global__ kernels; the tests under the same directory call the exported symbols (an illustrative skeleton follows below).
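For illustration only, here is a hypothetical skeleton in the shape described above: one __global__ kernel plus an exported host wrapper that a task's tests could call. The symbol names (vecAddKernel, solve) and the signature are assumptions for this sketch, not actual dataset entries.

```cuda
// Hypothetical student_kernel.cu-style sketch (illustrative only; not a real
// dataset entry). A task's tests would link against the exported host wrapper.
#include <cuda_runtime.h>

__global__ void vecAddKernel(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one output element per thread
    if (i < n) c[i] = a[i] + b[i];
}

// Exported host wrapper: computes the launch configuration and runs the kernel
// on device pointers supplied by the test.
extern "C" void solve(const float* a_d, const float* b_d, float* c_d, int n) {
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // ceil division
    vecAddKernel<<<blocks, threads>>>(a_d, b_d, c_d, n);
    cudaDeviceSynchronize();
}
```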
Coding Dataset Evaluation Sample
The dataset was evaluated using the coding evaluation harness within the PMPP environment on prime-environments (https://github.com/PrimeIntellect-ai/prime-environments/).
Model Performance
| Model | Total Tasks | Success Rate | Rollouts | 
|---|---|---|---|
| Qwen/Qwen3-Next-80B-A3B-Thinking | 53 | 24.5% (39/159) | 3 per task | 
Top Performing Tasks
| Task | Success | Description | 
|---|---|---|
| ch02-vecadd-single-turn | 3/3 | Vector addition kernel | 
| ch03-rgb2gray-single-turn | 3/3 | RGB to grayscale conversion | 
| ch09-histogram-naive-single-turn | 3/3 | Histogram computation | 
| ch09-histogram-shared-single-turn | 3/3 | Histogram computation | 
| ch14-spmv-csr-thread-per-row-single | 3/3 | Sparse matrix-vector multiply | 
| ch14-spmv-coo-single | 3/3 | Sparse matrix-vector multiply | 
| ch14-spmv-ell-single | 3/3 | Sparse matrix-vector multiply | 
| ch18-energy-gather-coarsened-single | 3/3 | Energy simulation kernel | 
Most Challenging Areas (0% Success)
| Challenge Category | Failed Tasks | Examples | 
|---|---|---|
| Matrix Operations | 6 tasks | Matrix multiplication variants, tiled algorithms | 
| Advanced Algorithms | 8 tasks | Sorting, reduction, merge operations | 
| Memory Optimization | 12 tasks | Shared memory, coalescing, thread coarsening | 
| MPI Integration | 3 tasks | Multi-GPU communication patterns | 
| Dynamic Parallelism | 3 tasks | Parent-child kernel launches | 
| Graph Algorithms | 3 tasks | BFS, sparse data structures | 
Intended Use
- pmpp_qa: Evaluate or fine-tune GPU-aware assistants on conceptual CUDA/MPI reasoning.
- pmpp_coding: Evaluate or fine-tune CUDA code-generation capabilities.
Limitations
- The coding tasks are tied to a specialized, CUDA-focused evaluation harness.
- Some coding tasks require runtime configuration (e.g., enlarging the device malloc heap; see the sketch below). The pmpp-eval harness handles those details.
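As a reference for that last point, here is a minimal sketch of the kind of runtime configuration meant by "enabling the device heap", assuming a task whose kernels call malloc/free on the device (the 64 MB size is an arbitrary example, not a harness default).

```cuda
#include <cuda_runtime.h>

int main() {
    // Enlarge the device malloc heap before the first kernel launch that uses
    // in-kernel malloc()/free(); 64 MB here is only an example value.
    size_t heapBytes = 64ull << 20;
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, heapBytes);
    // ... allocate buffers, launch the task's kernels, verify results ...
    return 0;
}
```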
Acknowledgements & Citation
Grateful acknowledgment to Prime Intellect for sponsoring this release and to the PMPP community for foundational materials. Additional inspiration and reference code were drawn from the open solution set at https://github.com/tugot17/pmpp. If you build on these datasets, please cite both sources:
@book{kirk2016programming,
  title     = {Programming Massively Parallel Processors: A Hands-on Approach},
  author    = {Hwu, Wen-mei W. and Kirk, David B. and El Hajj, Izzat},
  edition   = {4th},
  year      = {2022},
  publisher = {Morgan Kaufmann}
}
@misc{pmpp_eval,
  author = {Sinatras},
  title  = {pmpp-eval},
  year   = {2025},
  url    = {https://github.com/SinatrasC/pmpp-eval}
}
For questions or contributions, open an issue in https://github.com/SinatrasC/pmpp-eval.