|
On a machine with two GPUs that are connected with NVLink, you will most likely see something like:
|
```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
```
|
On a different machine without NVLink you may instead see:
|
```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      PHB     0-11            N/A
GPU1    PHB      X      0-11            N/A
```
|
The report includes this legend: |
|
```
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```
|
So the first report's `NV2` tells us the GPUs are interconnected with 2 NVLinks, while the second report's `PHB` indicates a typical consumer-level PCIe+Bridge setup.
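If you prefer to confirm this programmatically, here is a minimal sketch, assuming the `pynvml` package (nvidia-ml-py) and an NVIDIA driver are installed, that counts the active NVLink links on GPU 0; on the first machine above it should report 2, on the second 0:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0

active_links = 0
for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
    try:
        if pynvml.nvmlDeviceGetNvLinkState(handle, link) == pynvml.NVML_FEATURE_ENABLED:
            active_links += 1
    except pynvml.NVMLError:
        # GPU has no NVLink support or fewer links than the maximum
        break

print(f"GPU0 has {active_links} active NVLink link(s)")
pynvml.nvmlShutdown()
```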
|
Check what type of connectivity you have on your setup. |
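For example, here is a small sketch, assuming `nvidia-smi` is on your PATH and you have at least two GPUs, that dumps the topology matrix and does a crude check of the GPU0↔GPU1 link type (the column parsing is a simple heuristic, not an official API):

```python
import subprocess

# print the full topology report
topo = subprocess.run(["nvidia-smi", "topo", "-m"], capture_output=True, text=True).stdout
print(topo)

# crude heuristic: in the GPU1 row, the first column after the row label is the GPU0 cell
for line in topo.splitlines():
    if line.startswith("GPU1"):
        link = line.split()[1]
        print("GPU0 <-> GPU1 link type:", link)  # e.g. NV2 or PHB
```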