GPU

View the graphics card information

In [1]:
!nvidia-smi
Tue Jun  1 15:40:45 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:00:1B.0 Off |                    0 |
| N/A   56C    P0    55W / 300W |   8124MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  Off  | 00000000:00:1C.0 Off |                    0 |
| N/A   43C    P0    51W / 300W |   4252MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  Off  | 00000000:00:1D.0 Off |                    0 |
| N/A   41C    P0    40W / 300W |     11MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   62C    P0    62W / 300W |   1582MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2277      C   ...buntu/miniconda3/envs/d2l-en/bin/python  3289MiB |
|    0    127232      C   ...buntu/miniconda3/envs/d2l-en/bin/python  1389MiB |
+-----------------------------------------------------------------------------+
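
The same information can also be queried from within Python; a minimal sketch, assuming at least one visible GPU:

import torch

print(torch.cuda.get_device_name(0))        # e.g. 'Tesla V100-SXM2-16GB'
print(torch.cuda.get_device_properties(0))  # total memory, compute capability, ...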

Computing devices

In [2]:
import torch
from torch import nn

torch.device('cpu'), torch.cuda.device('cuda'), torch.cuda.device('cuda:1')
Out[2]:
(device(type='cpu'),
 <torch.cuda.device at 0x7f723468cdc0>,
 <torch.cuda.device at 0x7f7234655310>)
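
Note that torch.cuda.device(i) is not a device object like torch.device but a context manager that temporarily changes the current GPU; a minimal sketch, assuming at least two GPUs are visible:

import torch

# a device can also be built from a type string plus an index
dev = torch.device('cuda', 1)          # same as torch.device('cuda:1')

with torch.cuda.device(1):             # temporarily make GPU 1 the current device
    t = torch.zeros(2, device='cuda')  # bare 'cuda' resolves to the current GPU
print(t.device)                        # cuda:1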

Query the number of available GPUs

In [3]:
torch.cuda.device_count()
Out[3]:
2
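
When no GPU or driver is present, torch.cuda.device_count() simply returns 0, so an availability check is often done first; a minimal sketch:

import torch

if torch.cuda.is_available():
    print(f'{torch.cuda.device_count()} GPU(s) available')
else:
    print('no GPU available, falling back to CPU')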

These two functions allow us to run code even if the requested GPU does not exist

In [4]:
def try_gpu(i=0):
    """Return gpu(i) if it exists, otherwise return cpu()."""
    if torch.cuda.device_count() >= i + 1:
        return torch.device(f'cuda:{i}')
    return torch.device('cpu')

def try_all_gpus():
    """Return all available GPUs, or [cpu(),] if no GPU exists."""
    devices = [
        torch.device(f'cuda:{i}') for i in range(torch.cuda.device_count())]
    return devices if devices else [torch.device('cpu')]

try_gpu(), try_gpu(10), try_all_gpus()
Out[4]:
(device(type='cuda', index=0),
 device(type='cpu'),
 [device(type='cuda', index=0), device(type='cuda', index=1)])
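
For the common single-device case, essentially the same fallback logic is often written as a one-liner; a minimal sketch:

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')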

Query the device where the tensor is stored

In [5]:
x = torch.tensor([1, 2, 3])
x.device
Out[5]:
device(type='cpu')
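
Tensors are created on the CPU by default; moving one to another device is an explicit copy via Tensor.to. A minimal sketch using the try_gpu helper defined above:

x_gpu = x.to(try_gpu())  # copies to cuda:0 if a GPU exists; otherwise x stays on the CPU
print(x_gpu.device)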

Store on the GPU

In [6]:
X = torch.ones(2, 3, device=try_gpu())
X
Out[6]:
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')
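
The device argument also accepts a plain string, which can be handier in configuration code; a minimal sketch, assuming a GPU is present:

X2 = torch.ones(2, 3, device='cuda:0')  # string form, equivalent to device=try_gpu() here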

Create a random tensor on the second GPU

In [7]:
Y = torch.rand(2, 3, device=try_gpu(1))
Y
Out[7]:
tensor([[0.9333, 0.8735, 0.7784],
        [0.3453, 0.5509, 0.3475]], device='cuda:1')

To compute X + Y, we need to decide where to perform this operation

In [8]:
Z = X.cuda(1)
print(X)
print(Z)
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:1')
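
Adding tensors that live on different GPUs raises a RuntimeError rather than copying implicitly, and Tensor.to offers a more general way to spell the transfer; a minimal sketch, assuming the two GPUs used above:

try:
    X + Y  # X lives on cuda:0, Y on cuda:1
except RuntimeError as e:
    print(e)  # expected all tensors to be on the same device

Z_alt = X.to('cuda:1')  # the same transfer spelled with Tensor.to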

Now that the data (both Z and Y) are on the same GPU, we can add them up

In [9]:
Y + Z
Out[9]:
tensor([[1.9333, 1.8735, 1.7784],
        [1.3453, 1.5509, 1.3475]], device='cuda:1')
In [10]:
Z.cuda(1) is Z
Out[10]:
True
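
Tensor.to behaves the same way: when the tensor already has the target device (and dtype), it returns the tensor itself instead of making a copy; a minimal sketch:

print(Z.to('cuda:1') is Z)  # True: no copy is made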

Neural networks and GPUs

In [12]:
net = nn.Sequential(nn.Linear(3, 1))
net = net.to(device=try_gpu())

net(X)
Out[12]:
tensor([[-0.8412],
        [-0.8412]], device='cuda:0', grad_fn=<AddmmBackward>)
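
nn.Module.to moves the parameters in place and returns the same module object, which is why the reassignment above is optional; a minimal sketch:

print(net.to(try_gpu()) is net)  # True: the module is modified in place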

Confirm that the model parameters are stored on the same GPU

In [13]:
net[0].weight.data.device
Out[13]:
device(type='cuda', index=0)
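
For larger models it can be worth checking every parameter rather than just the first layer; a minimal sketch:

for name, param in net.named_parameters():
    print(name, param.device)  # every parameter should report cuda:0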