Hands-on Basics: FashionMNIST Clothing Classification

First, import the necessary packages

import os
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

Configure the training environment and hyperparameters

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
## Configure the other hyperparameters: batch_size, num_workers, learning rate, and the total number of epochs
batch_size = 256
num_workers = 4   # Windows users should set this to 0, otherwise multiprocessing errors will occur
lr = 1e-4
epochs = 20

Data reading and loading
Two approaches are shown here:

  • Download and use one of PyTorch's built-in datasets
  • Download the data stored in csv format from a website, read it in, and convert it to the expected format
    The first approach only works for common datasets such as MNIST and CIFAR10, for which PyTorch provides official downloads. It is typically useful for quickly testing a method (e.g. checking whether some idea works on the MNIST dataset).
    The second approach requires building your own Dataset, which is essential when applying PyTorch to your own work.

The data also needs some necessary transforms: for example, resizing the images to a uniform size so they can later be fed into the network for training, converting the data to the Tensor class, and so on.

These transforms can be done conveniently with the torchvision package, PyTorch's official toolkit for image processing; the built-in-dataset approach mentioned above also relies on it. One of PyTorch's great conveniences is that it comes with a whole "ecosystem" of official and third-party support across domains. We will cover this in detail in later lessons.

from torchvision import transforms

image_size = 28
data_transform = transforms.Compose([
    # transforms.ToPILImage(),  # converts the data format
    transforms.Resize(image_size),
    transforms.ToTensor()  # a Tensor converted from a PILImage has dtype torch.FloatTensor
])
## Approach 1: use the dataset bundled with torchvision; the download may take a while
from torchvision import datasets

train_data = datasets.FashionMNIST(root='./', train=True, download=True, transform=data_transform)
test_data = datasets.FashionMNIST(root='./', train=False, download=True, transform=data_transform)
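The second approach (reading from csv) needs a custom Dataset. Below is a minimal sketch of what that could look like, assuming csv files in the Kaggle fashion-mnist format, where each row is a label followed by 784 pixel values; the file path in the commented-out usage is hypothetical.

```python
## Approach 2 (sketch): build a custom Dataset from csv data.
## Assumes each row is [label, pixel0, ..., pixel783].
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset

class FMDataset(Dataset):
    def __init__(self, df, transform=None):
        self.transform = transform
        # Reshape the flat pixel columns into 28x28x1 uint8 images (HWC layout)
        self.images = df.iloc[:, 1:].values.astype(np.uint8).reshape(-1, 28, 28, 1)
        self.labels = df.iloc[:, 0].values

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = self.images[idx]
        label = int(self.labels[idx])
        if self.transform is not None:
            image = self.transform(image)
        else:
            # Scale to [0, 1] and convert HWC -> CHW, as ToTensor would
            image = torch.tensor(image / 255., dtype=torch.float).permute(2, 0, 1)
        return image, label

# train_df = pd.read_csv("./FashionMNIST/fashion-mnist_train.csv")  # hypothetical path
# train_data = FMDataset(train_df, data_transform)
```

Note that to pass raw numpy arrays through the data_transform defined above, the transforms.ToPILImage() line would need to be uncommented.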

After the training and test datasets are built, we need to define DataLoaders so the data can be loaded in batches during training and testing.

train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True)
# 1st argument: the dataset to read; batch_size: samples per batch; num_workers: number of worker processes; drop_last=True drops the last incomplete batch
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)

After reading the data in, we can do some visualization, mainly to verify that the data was read correctly.

# Visualize one batch via next(iter())
import matplotlib.pyplot as plt
image, label = next(iter(train_loader))
print(image.shape, label.shape)
plt.imshow(image[0][0], cmap="gray")
torch.Size([256, 1, 28, 28]) torch.Size([256])

[Figure: grayscale preview of the first image in the batch (output_12_2.png)]

Model design
Since the task is fairly simple, we hand-build a small CNN here rather than using any of today's complex architectures.
Once the model is built, move it to the GPU for training.

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
  • in_channels: number of input channels

  • out_channels: number of channels produced by the convolution

  • kernel_size: size of the convolution kernel

nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
  • kernel_size: size of the max-pooling window; a single value or a tuple
  • stride: the stride (defaults to kernel_size)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Sequential(
            # channel:1 -> 32
            nn.Conv2d(1, 32, 5),  # 2D convolution layer
            # 32 channels
            nn.ReLU(),  # activation function
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3),  # regularization: each element is zeroed with probability 0.3
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3)
        )
        self.fc = nn.Sequential(
            nn.Linear(64*4*4, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )
        
    def forward(self, x):
        x = self.conv(x)
        x = x.view(-1, 64*4*4)
        x = self.fc(x)
        # x = nn.functional.normalize(x)
        return x

model = Net()
model = model.cuda()
# model = nn.DataParallel(model).cuda()   # for multi-GPU training; covered further in a later lesson
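To see why the fully connected layer's input size is 64*4*4, we can trace a dummy input through a standalone copy of the same convolutional stack:

```python
import torch
import torch.nn as nn

# The same convolutional stack as in Net, traced layer by layer
conv = nn.Sequential(
    nn.Conv2d(1, 32, 5),        # 28x28 -> 24x24 (no padding: 28 - 5 + 1)
    nn.ReLU(),
    nn.MaxPool2d(2, stride=2),  # 24x24 -> 12x12
    nn.Conv2d(32, 64, 5),       # 12x12 -> 8x8
    nn.ReLU(),
    nn.MaxPool2d(2, stride=2),  # 8x8 -> 4x4
)
x = torch.zeros(1, 1, 28, 28)  # a dummy single-image batch
for layer in conv:
    x = layer(x)
    print(layer.__class__.__name__, tuple(x.shape))
# The final shape is (1, 64, 4, 4), hence the flatten size 64*4*4
```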

Setting the loss function
We use the CrossEntropy loss provided by the torch.nn module.
PyTorch's CrossEntropyLoss takes integer class labels directly (internally it combines LogSoftmax and NLLLoss), so there is no need to one-hot encode them.
Make sure the labels start from 0 and that the model has no softmax layer (the loss is computed from logits). This illustrates that the parts of a PyTorch training pipeline are not independent and must be considered together.

criterion = nn.CrossEntropyLoss()
# criterion = nn.CrossEntropyLoss(weight=torch.tensor([1,1,1,1,3,1,1,1,1,1], dtype=torch.float))  # per-class weights must be a Tensor
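A quick check of this behavior: CrossEntropyLoss consumes raw logits and integer class indices, and its value equals the negative log-softmax at the target index. A small self-contained sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw model outputs, no softmax applied
target = torch.tensor([0])                 # integer class index, starting from 0

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, target)

# Equivalent manual computation: negative log-softmax at the target index
manual = -F.log_softmax(logits, dim=1)[0, 0]
print(loss.item(), manual.item())  # the two values match
```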

Setting the optimizer
We use the Adam optimizer here.

optimizer = optim.Adam(model.parameters(), lr=lr)   # use the learning rate configured above

Training and testing (validation)
Each is wrapped in its own function for convenient reuse later.
Note the main differences between the two:

  • the model's mode setting (train vs eval)
  • whether the optimizer's gradients need to be zeroed
  • whether the loss needs to be backpropagated through the network
  • whether the optimizer needs to be stepped at each iteration

In addition, classification accuracy can be computed during testing or validation.

def train(epoch):
    model.train()
    train_loss = 0
    for data, label in train_loader:
        data, label = data.cuda(), label.cuda()
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))
def val(epoch):       
    model.eval()
    val_loss = 0
    gt_labels = []
    pred_labels = []
    with torch.no_grad():
        for data, label in test_loader:
            data, label = data.cuda(), label.cuda()
            output = model(data)
            preds = torch.argmax(output, 1)
            gt_labels.append(label.cpu().data.numpy())
            pred_labels.append(preds.cpu().data.numpy())
            loss = criterion(output, label)
            val_loss += loss.item()*data.size(0)
    val_loss = val_loss/len(test_loader.dataset)
    gt_labels, pred_labels = np.concatenate(gt_labels), np.concatenate(pred_labels)
    acc = np.sum(gt_labels==pred_labels)/len(pred_labels)
    print('Epoch: {} \tValidation Loss: {:.6f}, Accuracy: {:.6f}'.format(epoch, val_loss, acc))
for epoch in range(1, epochs+1):
    train(epoch)
    val(epoch)


Epoch: 1 	Training Loss: 0.688631
Epoch: 1 	Validation Loss: 0.464401, Accuracy: 0.831100
Epoch: 2 	Training Loss: 0.440354
Epoch: 2 	Validation Loss: 0.391231, Accuracy: 0.858500
Epoch: 3 	Training Loss: 0.370229
Epoch: 3 	Validation Loss: 0.334591, Accuracy: 0.881000
Epoch: 4 	Training Loss: 0.336309
Epoch: 4 	Validation Loss: 0.309734, Accuracy: 0.886000
Epoch: 5 	Training Loss: 0.309195
Epoch: 5 	Validation Loss: 0.300616, Accuracy: 0.885100
Epoch: 6 	Training Loss: 0.291201
Epoch: 6 	Validation Loss: 0.293780, Accuracy: 0.891800
Epoch: 7 	Training Loss: 0.281162
Epoch: 7 	Validation Loss: 0.275348, Accuracy: 0.900700
Epoch: 8 	Training Loss: 0.267369
Epoch: 8 	Validation Loss: 0.263158, Accuracy: 0.902200
Epoch: 9 	Training Loss: 0.252121
Epoch: 9 	Validation Loss: 0.257244, Accuracy: 0.906300
Epoch: 10 	Training Loss: 0.245798
Epoch: 10 	Validation Loss: 0.246164, Accuracy: 0.910700
Epoch: 11 	Training Loss: 0.236516
Epoch: 11 	Validation Loss: 0.241345, Accuracy: 0.912600
Epoch: 12 	Training Loss: 0.225927
Epoch: 12 	Validation Loss: 0.244102, Accuracy: 0.911100
Epoch: 13 	Training Loss: 0.217469
Epoch: 13 	Validation Loss: 0.242183, Accuracy: 0.911300
Epoch: 14 	Training Loss: 0.215015
Epoch: 14 	Validation Loss: 0.245351, Accuracy: 0.910500
Epoch: 15 	Training Loss: 0.205415
Epoch: 15 	Validation Loss: 0.235655, Accuracy: 0.913600
Epoch: 16 	Training Loss: 0.200848
Epoch: 16 	Validation Loss: 0.236427, Accuracy: 0.917100
Epoch: 17 	Training Loss: 0.193187
Epoch: 17 	Validation Loss: 0.228538, Accuracy: 0.917900
Epoch: 18 	Training Loss: 0.188149
Epoch: 18 	Validation Loss: 0.226993, Accuracy: 0.917000
Epoch: 19 	Training Loss: 0.184678
Epoch: 19 	Validation Loss: 0.223577, Accuracy: 0.919700
Epoch: 20 	Training Loss: 0.178555
Epoch: 20 	Validation Loss: 0.234853, Accuracy: 0.918500
