Getting Started with PyTorch, Part 2: Running a CNN on MNIST

0. Preface

PyTorch's dynamic graphs, automatic differentiation, and related concepts are explained well in many places online; see the recommended tutorials below.


Classification is a classic application of neural networks. One of the simplest examples is MNIST handwritten digit classification: 10 classes, and the dataset can be downloaded online.

Some useful tutorials:

https://morvanzhou.github.io/tutorials/machine-learning/torch/

https://zhuanlan.zhihu.com/p/26649126

http://pytorch.apachecn.org/cn/tutorials/


1. Setup and Hyperparameters

#coding=utf-8

import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision      # dataset module, used to download and transform data
import matplotlib.pyplot as plt
from torchvision import transforms, utils

torch.manual_seed(1)    # reproducible

# Hyper Parameters
EPOCH = 40           # number of passes over the full training set; 40 keeps training time reasonable
BATCH_SIZE = 20      # samples processed = batch_size * iterations
LR = 0.001           # learning rate
DOWNLOAD_MNIST = True  # set to False if you have already downloaded MNIST

2. Loading the Data

train_data = torchvision.datasets.MNIST(
    root='./mnist/',
    train=True,                                     # this is training data
    transform=torchvision.transforms.ToTensor(),    # Converts a PIL.Image or numpy.ndarray to
                                                    # torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0]
    download=DOWNLOAD_MNIST,
)

print(train_data.train_data.size())                 # (60000, 28, 28); newer torchvision: train_data.data
print(train_data.train_labels.size())               # (60000,); newer torchvision: train_data.targets

# Data Loader for easy mini-batch return in training; each image batch has shape (BATCH_SIZE, 1, 28, 28) = (20, 1, 28, 28)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
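To confirm the batch shape, it helps to pull a single batch from the loader (a quick check added here, not in the original post):

images, labels = next(iter(train_loader))
print(images.shape)    # torch.Size([20, 1, 28, 28])
print(labels.shape)    # torch.Size([20])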

The test set is loaded the same way: just change train=True to train=False, as sketched below.
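A minimal sketch; the names test_data and test_loader are my choice, and the evaluation loop in section 4 assumes them:

test_data = torchvision.datasets.MNIST(
    root='./mnist/',
    train=False,                                    # load the 10,000-image test split
    transform=torchvision.transforms.ToTensor(),
    download=DOWNLOAD_MNIST,
)
test_loader = Data.DataLoader(dataset=test_data, batch_size=BATCH_SIZE, shuffle=False)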


3. Building the CNN

This is the most convenient part of PyTorch.

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(  # input shape (1, 28, 28)
            nn.Conv2d(
                in_channels=1,      # input channels; MNIST is grayscale, use 3 for RGB images
                out_channels=16,    # n_filters
                kernel_size=5,      # filter size
                stride=1,           # filter movement/step
                padding=2,          # to keep the output height/width unchanged, use padding=(kernel_size-1)/2 when stride=1
            ),                      # output shape (16, 28, 28)
            nn.ReLU(),              # activation
            nn.MaxPool2d(kernel_size=2),    # downsample over a 2x2 window, output shape (16, 14, 14)
        )
        self.conv2 = nn.Sequential(  # input shape (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),  # output shape (32, 14, 14)
            nn.ReLU(),  # activation
            nn.MaxPool2d(2),  # output shape (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)   # fully connected layer, output 10 classes; change 10 to the number of classes you need

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)   # flatten the conv feature maps to (batch_size, 32 * 7 * 7)
        output = self.out(x)
        return output

cnn = CNN()
print(cnn)  # net architecture
"""
CNN (
  (conv1): Sequential (
    (0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): ReLU ()
    (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
  )
  (conv2): Sequential (
    (0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): ReLU ()
    (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
  )
  (out): Linear (1568 -> 10)
)
"""

As long as the layer shapes are set correctly, that is all it takes.
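A quick sanity check, added here as a suggestion (not in the original post): push a dummy batch through the net and confirm the output shape.

dummy = torch.zeros(1, 1, 28, 28)   # one fake grayscale 28x28 image
print(cnn(dummy).shape)             # torch.Size([1, 10]) -- one logit per class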


4. Training

optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)   # optimize all cnn parameters
loss_func = nn.CrossEntropyLoss()                       # the target label is not one-hotted


for epoch in range(EPOCH):
    cnn.train()            # back to training mode (the evaluation below switches to eval mode)
    train_loss = 0.
    train_acc = 0.
    for step, (x, y) in enumerate(train_loader):   # gives one batch; x is already normalized by ToTensor
        b_x = Variable(x)   # batch x
        b_y = Variable(y)   # batch y

        output = cnn(b_x)                  # cnn output: raw logits of shape (batch, 10)
        loss = loss_func(output, b_y)      # cross entropy loss
        train_loss += loss.item()
        pred = torch.max(output, 1)[1]     # index of the largest logit per row = predicted class
        train_acc += (pred == b_y).sum().item()
        optimizer.zero_grad()              # clear gradients of the previous step
        loss.backward()                    # backpropagation, compute gradients
        optimizer.step()                   # apply gradients
    print('Epoch: {}, Train Loss: {:.6f}, Acc: {:.6f}'.format(
        epoch, train_loss / len(train_loader), train_acc / len(train_data)))


    # evaluation--------------------------------
    cnn.eval()             # switch to evaluation mode
    eval_loss = 0.
    eval_acc = 0.
    with torch.no_grad():  # no gradients needed during evaluation
        for batch_x, batch_y in test_loader:
            batch_x, batch_y = Variable(batch_x), Variable(batch_y)

            out = cnn(batch_x)
            loss = loss_func(out, batch_y)
            eval_loss += loss.item()
            pred = torch.max(out, 1)[1]
            eval_acc += (pred == batch_y).sum().item()
    print('Test Loss: {:.6f}, Acc: {:.6f}'.format(
        eval_loss / len(test_loader), eval_acc / len(test_data)))
    
 

The loss used here is softmax cross-entropy (often called softmax loss); there are many write-ups online, e.g. https://blog.csdn.net/u014380165/article/details/77284921 and https://blog.csdn.net/zhangxb35/article/details/72464152?utm_source=itdadao&utm_medium=referra

Note that out holds raw, un-softmaxed logits: nn.CrossEntropyLoss applies log-softmax internally, so there is no need to add a softmax layer after out. See the links above for details.
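A quick numerical check of that claim (my addition, using made-up logits):

logits = torch.randn(4, 10)                 # fake raw network outputs for a batch of 4
labels = torch.tensor([3, 0, 7, 1])         # fake target classes
ce = nn.CrossEntropyLoss()(logits, labels)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), labels)
print(ce.item(), nll.item())                # identical: softmax is built into CrossEntropyLoss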

For the format of pred, see https://www.jianshu.com/p/e4c7b3eb8f3d — with a batch of 4 images, out is 4×10; the column index of the maximum value in each row is the predicted class, which gives top-1 accuracy.
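Concretely, with made-up logits:

out = torch.randn(4, 10)        # fake logits for a batch of 4 images
pred = torch.max(out, 1)[1]     # per-row argmax, shape (4,)
print(pred)                     # e.g. tensor([2, 9, 0, 5]) -- the predicted class indices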


And with that, a simple CNN walkthrough is complete~

