PyTorch Learning Notes - Convolutional Neural Networks (CNN)

Starting the convolutional neural network part.
As before, we use the MNIST dataset.

1. Import the required libraries & set the hyperparameters

The same routine as before:

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Hyperparameters
num_epochs = 10
num_classes = 10
batch_size = 100
learning_rate = 0.001

2. Load the data

train_dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

You may wonder why batch sizes like 32 or 64 are so common (note that the code above actually uses 100). My understanding is that a power-of-two batch size aligns well with binary hardware: memory blocks and GPU compute units are sized in powers of two, so such batches can be processed a bit more efficiently. For a small model like this, the difference is negligible either way. The snippet below peeks at one batch to confirm the shapes.
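As a quick sanity check, we can pull a single batch from train_loader and inspect the tensor shapes (a minimal sketch using only the objects defined above):

# Peek at a single batch to confirm the shapes
images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([100, 1, 28, 28]) -> 100 grayscale 28x28 images
print(labels.shape)   # torch.Size([100])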

3. Build the model

As you can see, this is a simple CNN, a few layers shallower than LeNet.

  • First, layer 1 defines 16 convolution kernels of size 5 with padding=2
  • Layer 2 then defines 32 kernels with the same size and padding as the first layer
  • A ReLU activation follows each convolution
  • A fully connected layer maps 1568 features to the 10 classes (see the shape trace below for where 1568 comes from)
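Where does 1568 come from? Each convolution here (kernel 5, stride 1, padding 2) preserves the spatial size, and each 2×2 max pool halves it: 28 → 14 → 7, giving 32 × 7 × 7 = 1568 features. A minimal standalone shape trace (mirroring the layers defined below, with batch norm omitted for brevity):

import torch
import torch.nn as nn

# Each conv (k=5, s=1, p=2) preserves spatial size; each 2x2 max pool halves it:
# 28x28 -> 28x28 -> 14x14 -> 14x14 -> 7x7
layer1 = nn.Sequential(nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2, 2))
layer2 = nn.Sequential(nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2, 2))

x = torch.randn(1, 1, 28, 28)        # one dummy MNIST-sized image
out = layer2(layer1(x))
print(out.shape)                     # torch.Size([1, 32, 7, 7])
print(out.reshape(1, -1).shape)      # torch.Size([1, 1568]) -> input size of the fc layer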
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.fc = nn.Linear(7 * 7 * 32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        # Flatten (batch, 32, 7, 7) to (batch, 1568) so the fully
        # connected layer can consume it
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)

        return out


model = ConvNet(num_classes).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
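One detail worth noting: nn.CrossEntropyLoss applies log-softmax internally and then computes the negative log-likelihood, which is why forward() above returns raw logits with no softmax. A tiny standalone check with made-up scores illustrates the equivalence:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])   # raw scores for 3 classes (made-up numbers)
target = torch.tensor([0])                  # index of the true class

ce = nn.CrossEntropyLoss()(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(ce.item(), nll.item())                # identical values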

4. Train the network

total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch: [{}/{}],Step: [{}/{}],Loss: {:.4f}'.format(
                epoch + 1, num_epochs, i + 1, total_step, loss.item()))

5. Test the network

# eval() switches BatchNorm to its running statistics;
# no_grad() disables gradient tracking during inference
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)

        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('test accuracy {}%'.format(100 * correct / total))

# Save only the model's parameters (the state_dict), not the whole object
torch.save(model.state_dict(), 'CNNModel.ckpt')
Epoch: [1/10],Step: [100/600],Loss: 0.2185
Epoch: [1/10],Step: [200/600],Loss: 0.0665
Epoch: [1/10],Step: [300/600],Loss: 0.0976
Epoch: [1/10],Step: [400/600],Loss: 0.1496
Epoch: [1/10],Step: [500/600],Loss: 0.0542
Epoch: [1/10],Step: [600/600],Loss: 0.0821
Epoch: [2/10],Step: [100/600],Loss: 0.0519
Epoch: [2/10],Step: [200/600],Loss: 0.0552
Epoch: [2/10],Step: [300/600],Loss: 0.0513
Epoch: [2/10],Step: [400/600],Loss: 0.0843
Epoch: [2/10],Step: [500/600],Loss: 0.0075
Epoch: [2/10],Step: [600/600],Loss: 0.0337
Epoch: [3/10],Step: [100/600],Loss: 0.0051
Epoch: [3/10],Step: [200/600],Loss: 0.0540
Epoch: [3/10],Step: [300/600],Loss: 0.0116
Epoch: [3/10],Step: [400/600],Loss: 0.0093
Epoch: [3/10],Step: [500/600],Loss: 0.0119
Epoch: [3/10],Step: [600/600],Loss: 0.0296
Epoch: [4/10],Step: [100/600],Loss: 0.0081
Epoch: [4/10],Step: [200/600],Loss: 0.0509
Epoch: [4/10],Step: [300/600],Loss: 0.0108
Epoch: [4/10],Step: [400/600],Loss: 0.0018
Epoch: [4/10],Step: [500/600],Loss: 0.0610
Epoch: [4/10],Step: [600/600],Loss: 0.0522
Epoch: [5/10],Step: [100/600],Loss: 0.0084
Epoch: [5/10],Step: [200/600],Loss: 0.0176
Epoch: [5/10],Step: [300/600],Loss: 0.0575
Epoch: [5/10],Step: [400/600],Loss: 0.0024
Epoch: [5/10],Step: [500/600],Loss: 0.0306
Epoch: [5/10],Step: [600/600],Loss: 0.0194
Epoch: [6/10],Step: [100/600],Loss: 0.0312
Epoch: [6/10],Step: [200/600],Loss: 0.0595
Epoch: [6/10],Step: [300/600],Loss: 0.0082
Epoch: [6/10],Step: [400/600],Loss: 0.0536
Epoch: [6/10],Step: [500/600],Loss: 0.0034
Epoch: [6/10],Step: [600/600],Loss: 0.0083
Epoch: [7/10],Step: [100/600],Loss: 0.0167
Epoch: [7/10],Step: [200/600],Loss: 0.0181
Epoch: [7/10],Step: [300/600],Loss: 0.0575
Epoch: [7/10],Step: [400/600],Loss: 0.0056
Epoch: [7/10],Step: [500/600],Loss: 0.0079
Epoch: [7/10],Step: [600/600],Loss: 0.0089
Epoch: [8/10],Step: [100/600],Loss: 0.0012
Epoch: [8/10],Step: [200/600],Loss: 0.0022
Epoch: [8/10],Step: [300/600],Loss: 0.0009
Epoch: [8/10],Step: [400/600],Loss: 0.0275
Epoch: [8/10],Step: [500/600],Loss: 0.0105
Epoch: [8/10],Step: [600/600],Loss: 0.0012
Epoch: [9/10],Step: [100/600],Loss: 0.0079
Epoch: [9/10],Step: [200/600],Loss: 0.0101
Epoch: [9/10],Step: [300/600],Loss: 0.0095
Epoch: [9/10],Step: [400/600],Loss: 0.0062
Epoch: [9/10],Step: [500/600],Loss: 0.0082
Epoch: [9/10],Step: [600/600],Loss: 0.0025
Epoch: [10/10],Step: [100/600],Loss: 0.0081
Epoch: [10/10],Step: [200/600],Loss: 0.0006
Epoch: [10/10],Step: [300/600],Loss: 0.0452
Epoch: [10/10],Step: [400/600],Loss: 0.0033
Epoch: [10/10],Step: [500/600],Loss: 0.0237
Epoch: [10/10],Step: [600/600],Loss: 0.0258
test accuracy 99.05%
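
Since only the state_dict was saved, reusing the model later means re-creating the network and loading the weights back in, roughly like this:

# Rebuild the architecture, then restore the saved parameters
model = ConvNet(num_classes).to(device)
model.load_state_dict(torch.load('CNNModel.ckpt'))
model.eval()   # switch to eval mode before inference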
