PyTorch

Contents

    • I. Creating the PyTorch environment
    • II. Jupyter configuration
      • Loading data in PyTorch
    • III. Hands-on practice
      • 1. Dataset in code
      • 2. Using TensorBoard
      • 3. TensorBoard (continued)
      • 4. Image transforms: using transforms
      • 5. Common transforms
      • 6. Using datasets from torchvision
      • 7. Using DataLoader
    • IV. Neural networks
      • 1. Basic network skeleton: using nn.Module
      • 2. The convolution operation
      • 3. Using convolutional layers (Conv2d)
      • 4. Using max pooling (MaxPool2d)
      • 5. Nonlinear activations
      • 6. Linear layers
      • 7. Sequential containers (nn.Sequential)
      • 8. Loss functions and backpropagation
      • 9. Optimizers
    • V. Using models
      • 1. Using and modifying existing models
      • 2. Saving and loading network models
      • 3. A complete model-training workflow (on the CIFAR10 dataset)
      • 4. Training workflow, part 2
      • 5. GPU training
      • 6. Complete model validation (inference)

I. Creating the PyTorch environment

Create a PyTorch environment; the name after -n is the environment name, and python= sets the Python version.

conda create -n pytorch python=3.11

Activate the environment:

conda activate pytorch

List the installed packages:

pip list

Install PyTorch (the download is painfully slow):

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

Verify that the installation succeeded and that the GPU is usable:

(pytorch) E:\deep_learning>python
import torch
torch.cuda.is_available()
True
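
A couple of extra checks can be run in the same session (a minimal sketch; the exact version string and GPU name will differ per machine):

import torch
print(torch.__version__)                   # installed PyTorch version
print(torch.version.cuda)                  # CUDA version PyTorch was built against
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the first visible GPU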

II. Jupyter configuration

Install Jupyter:

(pytorch) E:\deep_learning>conda install nb_conda

Switched to the following command instead, and it worked:

conda install -n pytorch ipykernel

Jupyter Notebook can finally run with the pytorch environment.

A problem I ran into: PyCharm could not find the conda executable. See the reposted article "23年最新版pycharm找不到conda可执行文件解决办法" (a fix for the 2023 PyCharm release not finding the conda executable).
It finally worked, though it took some effort.

dir(): open a package and see what is inside
help(): the documentation / manual
dir(torch)

dir(torch.cuda)
dir(torch.cuda.is_available())
help(torch.cuda.is_available)
Help on function is_available in module torch.cuda:
is_available() -> bool
    Returns a bool indicating if CUDA is currently available.

Loading data in PyTorch

III. Hands-on practice

1. Dataset in code

import os.path

from torch.utils.data import Dataset
from PIL import Image
class MyData(Dataset):
    # path setup
    def __init__(self,root_dir,label_dir):
        # attributes assigned here are available throughout the class
        self.root_dir=root_dir
        self.label_dir=label_dir
        # directory of the images -- join the two paths
        self.path=os.path.join(self.root_dir,self.label_dir)
        # list of all image file names
        self.image_path=os.listdir(self.path)

    # return one image and its label
    def __getitem__(self, idx):
        # file name of the idx-th image
        img_name=self.image_path[idx]
        # relative path of the image
        img_item_path=os.path.join(self.root_dir,self.label_dir,img_name)
        # open the image at that path
        img=Image.open(img_item_path)
        label=self.label_dir
        return img,label
    # number of samples
    def __len__(self):
        return len(self.image_path)

root_dir="dataset/train"
ants_label_dir="ants_image"
bees_label_dir="bees_image"
ants_dataset=MyData(root_dir,ants_label_dir)
bees_dataset=MyData(root_dir,bees_label_dir)

# concatenate into the full training set
train_dataset=ants_dataset+bees_dataset
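
The concatenation behaves like a single dataset (a minimal sketch of how it can be used, assuming the dataset/train folder layout above):

# the combined dataset (a ConcatDataset) can be indexed and measured like any Dataset
print(len(ants_dataset),len(bees_dataset),len(train_dataset))  # the lengths add up
img,label=train_dataset[0]   # the first sample comes from the ants subset
print(label)                 # 'ants_image'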

2. Using TensorBoard

Install TensorBoard in the pytorch environment:

(pytorch) C:\Users\Dell>pip install tensorboard

Launch TensorBoard; logdir is the folder that contains the event files:

(pytorch) E:\deep_pytorch>tensorboard --logdir=pytorch/transform/logs

Change the port:

(pytorch) E:\deep_pytorch>tensorboard --logdir=logs --port=6008

http://localhost:6006/

from  torch.utils.tensorboard import  SummaryWriter

writer=SummaryWriter("logs")

# writer.add_image()
# y=x
for i in range(100):
    writer.add_scalar("y=x",i,i)
"""
tag (str): data identifier
scalar_value (float or string/blobname): the value to record (plotted on the y-axis)
global_step (int): the global step (plotted on the x-axis)
"""
writer.close()

3. TensorBoard (continued)

from  torch.utils.tensorboard import  SummaryWriter
import numpy as np
from PIL import Image
writer=SummaryWriter("logs")

img_path=r"E:\deep_pytorch\dataset\train\ants_image\0013035.jpg"
img_PIL=Image.open(img_path)

img_array=np.array(img_PIL)
writer.add_image("test",img_array,1,dataformats='HWC')
"""
tag (str): data identifier
img_tensor (torch.Tensor, numpy.ndarray, or string/blobname): image data
global_step (int): the global step to record
"""
# y=3x
for i in range(100):
    writer.add_scalar("y=3x",3*i,i)
"""
tag (str): data identifier
scalar_value (float or string/blobname): the value to record (plotted on the y-axis)
global_step (int): the global step (plotted on the x-axis)
"""
writer.close()

4. Image transforms: using transforms

from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
from PIL import Image
import cv2
# from Python/PIL usage to the tensor data type
# transforms.ToTensor answers two questions: how to use transforms, and why tensors are needed

img_path=r"E:\deep_pytorch\dataset\train\bees_image\90179376_abc234e5f4.jpg"
img=Image.open(img_path)

writer=SummaryWriter("logs")

# 1. how to use transforms
tensor_trans=transforms.ToTensor()
# convert the PIL image to a tensor
tensor_img=tensor_trans(img)

# 2. why the Tensor data type is needed
writer.add_image("Tensor_img2",tensor_img)
writer.close()
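
One way to see why the tensor type matters is simply to inspect the converted image (a minimal sketch; it only prints a few properties of tensor_img from the snippet above):

# ToTensor yields a float tensor in CHW layout with values scaled to [0, 1]
print(type(tensor_img))       # <class 'torch.Tensor'>
print(tensor_img.shape)       # torch.Size([3, H, W]) -- channels first
print(tensor_img.dtype)       # torch.float32
print(tensor_img.min().item(), tensor_img.max().item())  # values within [0, 1]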

5. Common transforms

from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

writer=SummaryWriter("logs")

img=Image.open(r"E:\deep_pytorch\dataset\train\ants_image\0013035.jpg")
# ToTensor
trans_totensor=transforms.ToTensor()
img_totensor=trans_totensor(img)

writer.add_image("to_tensor",img_totensor)

# Normalize: output[c] = (input[c] - mean[c]) / std[c], per channel
print(img_totensor[0][0][0])
trans_norm=transforms.Normalize([1,3,5],[1,3,5])
img_norm=trans_norm(img_totensor)
print(img_norm[0][0][0])
writer.add_image("img_normalize",img_norm,2)
writer.close()
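
Normalize applies output[c] = (input[c] - mean[c]) / std[c] to each channel c, which is easy to check by hand (a small verification sketch using the mean and std values above):

# with mean=[1,3,5] and std=[1,3,5], the first pixel of channel 0 becomes (x - 1) / 1
x = img_totensor[0][0][0].item()
print((x - 1) / 1)                               # matches img_norm[0][0][0]
print((img_totensor[1][0][0].item() - 3) / 3)    # matches img_norm[1][0][0]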


from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

writer=SummaryWriter("logs")

img=Image.open(r"E:\deep_pytorch\dataset\train\ants_image\0013035.jpg")
# ToTensor
trans_totensor=transforms.ToTensor()
img_totensor=trans_totensor(img)

writer.add_image("to_tensor",img_totensor)

# Normalize (per-channel normalization)
print(img_totensor[0][0][0])
trans_norm=transforms.Normalize([1,3,5],[1,3,5])
img_norm=trans_norm(img_totensor)
print(img_norm[0][0][0])
writer.add_image("img_normalize",img_norm)

# Resize
print(img.size)
trans_resize=transforms.Resize((512,512))
# img (PIL) --> Resize --> img_resize (PIL)
img_resize=trans_resize(img)
# img_resize (PIL) --> trans_totensor --> tensor
img_resize=trans_totensor(img_resize)
writer.add_image("resize",img_resize)


writer.close()


# Compose
trans_resize_2=transforms.Resize(256)
trans_compose=transforms.Compose([trans_resize_2,trans_totensor])
img_resize_2=trans_compose(img)
writer.add_image("Compose",img_resize_2,1)

6. Using datasets from torchvision

https://pytorch.org/vision/0.9/datasets.html#cifar

import torchvision
from torch.utils.tensorboard import SummaryWriter

# convert PIL images to the tensor data type
dataset_transform=torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
])
# transform=dataset_transform converts every image in the dataset to a tensor
train_set=torchvision.datasets.CIFAR10(root="./dataset",train=True,transform=dataset_transform,download=True)
test_set=torchvision.datasets.CIFAR10(root="./dataset",train=False,transform=dataset_transform,download=True)

# # first test image
# print(test_set[0])
# # classes of the test set
# print(test_set.classes)
# # one sample: img, target
# img,target=test_set[0]
# print(img)
# print(target)
# # display the image
# img.show()
writer=SummaryWriter("p1")
# write the first ten images of the test set to TensorBoard
for i in range(10):
    img,target=test_set[i]
    writer.add_image("test_set",img,i)
# close the writer
writer.close()

7. Using DataLoader

import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# test dataset
test_data=torchvision.datasets.CIFAR10("../torch/dataset",train=False,transform=torchvision.transforms.ToTensor())
# batch_size=64 stacks 64 images per batch, shuffle=True reshuffles the data each epoch, drop_last=True drops the final batch if it has fewer than 64 images
test_loader=DataLoader(dataset=test_data,batch_size=64,shuffle=True,num_workers=0,drop_last=True)

# first image and its target from the test dataset
img,target=test_data[0]
print(img.shape)
print(target)

writer=SummaryWriter("dataloader")
step=0
for data in test_loader:
    imgs,targets=data
    # print(imgs.shape)
    # print(targets)
    writer.add_images("test_data_four",imgs,step)
    step=step+1
writer.close()

IV. Neural networks

1. Basic network skeleton: using nn.Module

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))


import torch
from torch import nn


class liu(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        output = input+1
        return  output
l=liu()
x=torch.tensor(1.0)
output=l(x)
print(output)


2. The convolution operation

import torch
import torch.nn.functional as F
# input image
input=torch.tensor([[1,2,0,3,1],
                    [0,1,2,3,1],
                    [1,2,1,0,0],
                    [5,2,3,1,1],
                    [2,1,0,1,1]])

# convolution kernel
kernel=torch.tensor([[1,2,1],
                     [0,1,0],
                     [2,1,0]])
# F.conv2d expects an input tensor of shape (minibatch, in_channels, iH, iW)
# originally torch.Size([5, 5]) and torch.Size([3, 3])
# reshape to the required format
# batch_size=1, channels=1, 5x5
input=torch.reshape(input,(1,1,5,5))
kernel=torch.reshape(kernel,(1,1,3,3))

print(input.shape)
print(kernel.shape)

output=F.conv2d(input,kernel,stride=1)
print(output)
# padding pads the border (with zeros) so that edge values still contribute to the output
output2=F.conv2d(input,kernel,stride=1,padding=1)
print(output2)
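
For comparison, a larger stride shrinks the output; this extra check is not in the original snippet, just a small sketch reusing the input and kernel above:

# stride=2 moves the kernel two cells at a time, so the 5x5 input yields a 2x2 output
output3=F.conv2d(input,kernel,stride=2)
print(output3)
print(output3.shape)  # torch.Size([1, 1, 2, 2])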

3. Using convolutional layers (Conv2d)

CONV2D

CLASS torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)


import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# dataset path, test split, converted to tensors
dataset=torchvision.datasets.CIFAR10("../../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)

dataloader=DataLoader(dataset,batch_size=64)

class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.conv1=Conv2d(in_channels=3,out_channels=6,kernel_size=3,stride=1,padding=0)

    def forward(self,x):
        x=self.conv1(x)
        return x
l=liu()
print(l)
writer=SummaryWriter("../logs")
step=0
for data in dataloader:
    imgs,target=data
    output=l(imgs)
    print(f"old: {imgs.shape}")
    print(f"new: {output.shape}")
    # old: torch.Size([64, 3, 32, 32])
    writer.add_images("input",imgs,step)
    # new: torch.Size([64, 6, 30, 30])
    # add_images needs 3 channels --> reshape to [xxx, 3, 30, 30]
    output=torch.reshape(output,(-1,3,30,30))
    writer.add_images("output",output,step)

    step=step+1
writer.close()

4. Using max pooling (MaxPool2d)

Max pooling reduces the amount of data while preserving its salient features.

torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)


import torch
import torchvision.datasets
from torch import nn
from torch.nn import MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset=torchvision.datasets.CIFAR10("../../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)

dataloader=DataLoader(dataset,batch_size=64)
"""
# input image
input=torch.tensor([[1,2,0,3,1],
                    [0,1,2,3,1],
                    [1,2,1,0,0],
                    [5,2,3,1,1],
                    [2,1,0,1,1]],dtype=torch.float32)

# reshape to (N, C, H, W)
input=torch.reshape(input,(-1,1,5,5))
print(input)
"""

class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        # ceil_mode=False floors the output size (drops incomplete windows); ceil_mode=True keeps them
        self.maxpool1=MaxPool2d(kernel_size=3,ceil_mode=True)

    def forward(self,input):
        output=self.maxpool1(input)
        return output

l=liu()
writer=SummaryWriter("../logs_writer")
step=0
for data in dataloader:
    imgs,target=data
    output=l(imgs)
    writer.add_images("input250",imgs,step)
    writer.add_images("output250",output,step)
    step=step+1
writer.close()
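
To see exactly what ceil_mode changes, the 5x5 example that is commented out above works well (a minimal standalone sketch; with kernel_size=3 the stride also defaults to 3):

import torch
from torch.nn import MaxPool2d
x=torch.tensor([[1,2,0,3,1],
                [0,1,2,3,1],
                [1,2,1,0,0],
                [5,2,3,1,1],
                [2,1,0,1,1]],dtype=torch.float32).reshape(1,1,5,5)
# ceil_mode=True keeps the partially covered windows -> 2x2 output, max values [[2., 3.], [5., 1.]]
print(MaxPool2d(3,ceil_mode=True)(x))
# ceil_mode=False drops them -> 1x1 output, max value [[2.]]
print(MaxPool2d(3,ceil_mode=False)(x))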

5. Nonlinear activations

ReLU
Sigmoid

import torch
from torch import nn
from torch.nn import ReLU

input=torch.tensor([[1,-0.5],
                    [-1,3]])

input=torch.reshape(input,(-1,1,2,2))
print(input)

class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.relu1=ReLU()

    def forward(self,input):
        output=self.relu1(input)
        return output

l=liu()
output=l(input)
print(output)


import torch
import torchvision.datasets
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# test dataset
dataset=torchvision.datasets.CIFAR10("../../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)
# batches of 64
dataloader=DataLoader(dataset,batch_size=64)
"""
# input data
input=torch.tensor([[1,-0.5],
                    [-1,3]])

input=torch.reshape(input,(-1,1,2,2))
print(input)
"""
# module with ReLU and Sigmoid activations (forward applies the Sigmoid)
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.relu1=ReLU()
        self.sigmoid1=Sigmoid()

    def forward(self,input):
        output=self.sigmoid1(input)
        return output

l=liu()
"""
output=l(input)
print(output)
"""
# create a TensorBoard writer
writer=SummaryWriter("logs_relu")
step=0
# pass the test set through the activation
for data in dataloader:
    imgs,target=data
    output=l(imgs)
    writer.add_images("one",imgs,step)
    writer.add_images("two",output,step)

    step=step+1

writer.close()



6. Linear layers

import torch
import torchvision
from torch.nn import Linear
from torch import nn
from torch.utils.data import DataLoader

test_data=torchvision.datasets.CIFAR10("../torch/dataset",train=False,transform=torchvision.transforms.ToTensor())

test_loader=DataLoader(dataset=test_data,batch_size=64)

class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.linear1=Linear(196608,10)

    def forward(self,input):
        output=self.linear1(input)
        return output


for data in test_loader:
    imgs,targets=data
    # flatten the whole batch into a 1-D tensor
    output=torch.flatten(imgs)
    print(imgs.shape)
    print(output.shape)
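
The class above is defined but never called in the loop; wiring it in would look roughly like this (a sketch, assuming full batches of 64 images, so the flattened batch has 64*3*32*32 = 196608 elements):

l=liu()
for data in test_loader:
    imgs,targets=data            # imgs: torch.Size([64, 3, 32, 32])
    output=torch.flatten(imgs)   # torch.Size([196608])
    output=l(output)             # Linear(196608, 10) -> torch.Size([10])
    print(output.shape)
# note: the last batch of CIFAR10's test split has fewer than 64 images,
# so drop_last=True would be needed in the DataLoader for this to run through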



7. Sequential containers (nn.Sequential)

model = nn.Sequential(
          nn.Conv2d(1,20,5),
          nn.ReLU(),
          nn.Conv2d(20,64,5),
          nn.ReLU()
        )

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1,20,5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20,64,5)),
          ('relu2', nn.ReLU())
        ]))


import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter


class liu(nn.Module):
    def __init__(self):
        """
        self.conv1=Conv2d(3,32,5,padding=2)
        self.maxpool1=MaxPool2d(2)
        self.conv2=Conv2d(32,32,5,padding=2)
        self.maxpool2=MaxPool2d(2)
        self.conv3=Conv2d(32,64,5,padding=2)
        self.maxpool3=MaxPool2d(2)

        self.flatten=Flatten()
        self.linear1=Linear(1024,64)
        self.linear2=Linear(64,10)
        """
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        """
        x=self.conv1(x)
        x=self.maxpool1(x)
        x=self.conv2(x)
        x=self.maxpool2(x)
        x=self.conv3(x)
        x=self.maxpool3(x)

        x=self.flatten(x)
        x=self.linear1(x)
        x=self.linear2(x)
        :param x:
        :return:
        """
        x=self.mdoel1(x)
        return x
l=liu()
# a dummy batch in image format: (N, C, H, W)
input=torch.ones((64,3,32,32))
output=l(input)
print(output.shape)

writer=SummaryWriter("logs")
# add_graph writes the model graph to TensorBoard
writer.add_graph(l,input)
writer.close()

8. Loss functions and backpropagation

from torch.nn import L1Loss, MSELoss

import torch

inputs=torch.tensor([1,2,3],dtype=torch.float32)
targets=torch.tensor([1,2,5],dtype=torch.float32)

inputs=torch.reshape(inputs,(1,1,1,3))

targets=torch.reshape(targets,(1,1,1,3))

loss=L1Loss()
result=loss(inputs,targets)

loss_mse=MSELoss()
result_mse=loss_mse(inputs,targets)

print(result)
print(result_mse)


from torch import nn
from torch.nn import L1Loss, MSELoss

import torch

inputs=torch.tensor([1,2,3],dtype=torch.float32)
targets=torch.tensor([1,2,5],dtype=torch.float32)

inputs=torch.reshape(inputs,(1,1,1,3))

targets=torch.reshape(targets,(1,1,1,3))

loss=L1Loss()
result=loss(inputs,targets)

loss_mse=MSELoss()
result_mse=loss_mse(inputs,targets)

print(result)
print(result_mse)

# network output (logits)
x=torch.tensor([0.1,0.2,0.3])
# target class index
y=torch.tensor([1])
# batch size 1, 3 classes
x=torch.reshape(x,(1,3))
# cross-entropy loss
loss_cross=nn.CrossEntropyLoss()
result_cross=loss_cross(x,y)
print(result_cross)
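
The printed value can be checked by hand: with logits x = [0.1, 0.2, 0.3] and target class 1, CrossEntropyLoss computes -x[1] + log(sum(exp(x))) (a small verification sketch):

import math
logits=[0.1,0.2,0.3]
target=1
loss_by_hand=-logits[target]+math.log(sum(math.exp(v) for v in logits))
print(loss_by_hand)  # ~1.1019, matching result_cross above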

9. Optimizers

To construct an Optimizer, you give it an iterable containing the parameters to optimize (all of which should be learnable parameters). You can then specify optimizer-specific options such as the learning rate, weight decay, and so on.

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)

This simplified usage is supported by most optimizers: once the gradients have been computed, e.g. with backward(), optimizer.step() can be called.

for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()

Applied to the Sequential CIFAR10 network from the previous section:

import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

# test dataset
dataset=torchvision.datasets.CIFAR10("../../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)
# batches of 64
dataloader=DataLoader(dataset,batch_size=64)


class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x
l=liu()
# loss function
loss=nn.CrossEntropyLoss()
# optimizer -- model parameters and learning rate lr
optim=torch.optim.SGD(l.parameters(),lr=0.01)
for epoch in range(20):
    # running sum of the loss over one epoch
    running_loss=0.0
    for data in dataloader:
        imgs,targets=data
        outputs=l(imgs)
        result_loss=loss(outputs,targets)
        # zero the gradients of every model parameter
        optim.zero_grad()
        # backpropagate to compute the gradient at every node
        result_loss.backward()
        # update every model parameter
        optim.step()
        running_loss=running_loss+result_loss
    print(running_loss)

V. Using models

1. Using and modifying existing models

VGG
ImageNet

import torchvision.datasets
from torch import nn

# train_data=torchvision.datasets.ImageNet("../../dataset",split='train',
#                                          transform=torchvision.transforms.ToTensor(),download=True)
# build the network without pretrained weights
vgg16_false=torchvision.models.vgg16(pretrained=False)
# download the pretrained weights
vgg16_true=torchvision.models.vgg16(pretrained=True)

print(vgg16_true)

train_data=torchvision.datasets.CIFAR10("../../dataset",train=True,transform=torchvision.transforms.ToTensor(),download=True)

# add a layer to the existing model
vgg16_true.classifier.add_module("add_linear",nn.Linear(1000,10))
print(vgg16_true)

# modify a layer of the existing model
vgg16_false.classifier[6]=nn.Linear(4096,10)
print(vgg16_false)

Adding a layer adapts the model to the new CIFAR10 dataset: the original 1000-class output is mapped to 10 classes through an extra linear layer.
Modifying the last classifier layer adapts the model to CIFAR10 directly, so it outputs 10 classes.

2. Saving and loading network models

import torch
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

vgg16=torchvision.models.vgg16(pretrained=False)

# save method 1: model structure + parameters
torch.save(vgg16,"vgg16_method1.pth")

# save method 2: parameters only, as a state dict
torch.save(vgg16.state_dict(),"vgg16_method2.pth")

# the "pitfall" model: a custom network saved with method 1
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x

l=liu()
torch.save(l,"l.pth")



import torch
import torchvision.models
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

# loading method 1
model1=torch.load("vgg16_method1.pth")
print(model1)

# loading method 2: load the state dict back into a freshly built model
vgg16=torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))
# model2=torch.load("vgg16_method2.pth")
print(vgg16)

# the pitfall: loading a custom model requires its class definition to be available
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x
model3=torch.load("l.pth")
print(model3)

The state dict saved by method 2 has to be loaded back into a model before it can be used as one.
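
The pitfall of method 1 is that torch.load unpickles the whole object, so the class liu must be defined (or imported) in the loading script; a hedged sketch of what happens otherwise:

# in a fresh script that does NOT define or import the class liu:
import torch
model3=torch.load("l.pth")   # typically raises AttributeError: Can't get attribute 'liu' ...
# defining the class, or importing it from the file that declares it, avoids the error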

3. A complete model-training workflow (on the CIFAR10 dataset)

model.py

from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
import torch
# build the neural network
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x

if __name__ == '__main__':
    # sanity-check the network
    l=liu()
    # 64 images, 3 channels, 32x32
    input=torch.ones((64,3,32,32))
    output=l(input)
    print(output.shape)

train.py

import torch
from torch import nn
import torchvision.datasets
from torch.utils.data import DataLoader
from model import *
# training dataset
train_data=torchvision.datasets.CIFAR10("../dataset",train=True,transform=torchvision.transforms.ToTensor(),download=True)
# test dataset
test_data=torchvision.datasets.CIFAR10("../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)

# dataset lengths
train_data_len=len(train_data)
test_data_len=len(test_data)
print("Training set length: {}".format(train_data_len))
print("Test set length: {}".format(test_data_len))

# load the datasets with DataLoader
train_dataloader=DataLoader(train_data,batch_size=64)
test_dataloader=DataLoader(test_data,batch_size=64)

# create the network model
l=liu()

# loss function
loss_fn=nn.CrossEntropyLoss()

# optimizer -- learning rate lr: 1e-2 = 1 * 10^-2 = 0.01
optimizer=torch.optim.SGD(l.parameters(),lr=1e-2)

# some training bookkeeping
# number of training steps
total_train_step=0
# number of test steps
total_test_step=0
# number of epochs
epoch=10

for i in range(epoch):
    print("----- Epoch {} starts -----".format(i+1))
    # training steps
    for data in train_dataloader:
        imgs,targets=data
        outputs=l(imgs)
        # loss between the outputs and the targets
        loss=loss_fn(outputs,targets)
        # zero the gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update the parameters
        optimizer.step()
        # count of optimization steps
        total_train_step=total_train_step+1
        print("Training step: {}, loss: {}".format(total_train_step,loss.item()))

4. Training workflow, part 2

from torch import nn
from torch.utils.tensorboard import SummaryWriter

import torch
import torchvision.datasets
from torch.utils.data import DataLoader
from model import *
# training dataset
train_data=torchvision.datasets.CIFAR10("../dataset",train=True,transform=torchvision.transforms.ToTensor(),download=True)
# test dataset
test_data=torchvision.datasets.CIFAR10("../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)

# dataset lengths
train_data_len=len(train_data)
test_data_len=len(test_data)
print("Training set length: {}".format(train_data_len))
print("Test set length: {}".format(test_data_len))

# load the datasets with DataLoader
train_dataloader=DataLoader(train_data,batch_size=64)
test_dataloader=DataLoader(test_data,batch_size=64)

# create the network model
l=liu()

# loss function
loss_fn=nn.CrossEntropyLoss()

# optimizer -- learning rate lr: 1e-2 = 1 * 10^-2 = 0.01
optimizer=torch.optim.SGD(l.parameters(),lr=1e-2)

# some training bookkeeping
# number of training steps
total_train_step=0
# number of test steps
total_test_step=0
# number of epochs
epoch=10
# add TensorBoard
writer=SummaryWriter("logs_train")

for i in range(epoch):
    print("----- Epoch {} starts -----".format(i+1))
    # training steps
    for data in train_dataloader:
        imgs,targets=data
        outputs=l(imgs)
        # loss between the outputs and the targets
        loss=loss_fn(outputs,targets)
        # zero the gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update the parameters
        optimizer.step()
        # count of optimization steps
        total_train_step=total_train_step+1
        if total_train_step%100==0:
            print("Training step: {}, loss: {}".format(total_train_step,loss.item()))
            writer.add_scalar("train_loss",loss.item(),total_train_step)

    # evaluation steps
    total_test_loss=0
    # number of correct predictions on the test set
    total_accuracy=0
    with torch.no_grad():
        for data in test_dataloader:
            imgs,targets=data
            outputs=l(imgs)
            loss=loss_fn(outputs,targets)
            total_test_loss=total_test_loss+loss.item()
            # number of correct predictions in this batch
            accuracy=(outputs.argmax(1) == targets).sum()
            total_accuracy=total_accuracy+accuracy

    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy/test_data_len))
    writer.add_scalar("test_loss",total_test_loss,total_test_step)
    writer.add_scalar("accuracy",total_accuracy/test_data_len,total_test_step)
    total_test_step=total_test_step+1

    # save the model after each epoch
    torch.save(l,"l_model{}.pth".format(i))
    print("Model saved")

writer.close()
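
A quick illustration of how the accuracy line works: outputs has shape [batch, 10], argmax(1) picks the predicted class for each sample, and comparing with targets counts the hits (a small standalone sketch):

import torch
outputs=torch.tensor([[0.1,0.9],
                      [0.4,0.6]])        # 2 samples, 2 classes
targets=torch.tensor([0,1])
preds=outputs.argmax(1)                  # tensor([1, 1])
print((preds==targets).sum().item())     # 1 correct prediction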

5. GPU training

from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.tensorboard import SummaryWriter
import torch
import torchvision.datasets
from torch.utils.data import DataLoader
import time
# training dataset
train_data=torchvision.datasets.CIFAR10("../dataset",train=True,transform=torchvision.transforms.ToTensor(),download=True)
# test dataset
test_data=torchvision.datasets.CIFAR10("../dataset",train=False,transform=torchvision.transforms.ToTensor(),download=True)

# dataset lengths
train_data_len=len(train_data)
test_data_len=len(test_data)
print("Training set length: {}".format(train_data_len))
print("Test set length: {}".format(test_data_len))

# load the datasets with DataLoader
train_dataloader=DataLoader(train_data,batch_size=64)
test_dataloader=DataLoader(test_data,batch_size=64)

# create the network model
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=nn.Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x
l=liu()
# move the model to the GPU if one is available
if torch.cuda.is_available():
    l=l.cuda()
# loss function
loss_fn=nn.CrossEntropyLoss()
# move the loss function to the GPU
if torch.cuda.is_available():
    loss_fn=loss_fn.cuda()
# optimizer -- learning rate lr: 1e-2 = 1 * 10^-2 = 0.01
optimizer=torch.optim.SGD(l.parameters(),lr=1e-2)

# some training bookkeeping
# number of training steps
total_train_step=0
# number of test steps
total_test_step=0
# number of epochs
epoch=10
# add TensorBoard
writer=SummaryWriter("logs_train")

# start time
start_time=time.time()
for i in range(epoch):
    print("----- Epoch {} starts -----".format(i+1))
    # training steps
    for data in train_dataloader:
        imgs,targets=data
        # move the batch to the GPU
        if torch.cuda.is_available():
            imgs=imgs.cuda()
            targets=targets.cuda()
        outputs=l(imgs)
        # loss between the outputs and the targets
        loss=loss_fn(outputs,targets)
        # zero the gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update the parameters
        optimizer.step()
        # count of optimization steps
        total_train_step=total_train_step+1
        if total_train_step%100==0:
            end_time=time.time()
            print(end_time-start_time)
            print("Training step: {}, loss: {}".format(total_train_step,loss.item()))
            writer.add_scalar("train_loss",loss.item(),total_train_step)

    # evaluation steps
    total_test_loss=0
    # number of correct predictions on the test set
    total_accuracy=0
    with torch.no_grad():
        for data in test_dataloader:
            imgs,targets=data
            # move the batch to the GPU
            if torch.cuda.is_available():
                imgs = imgs.cuda()
                targets = targets.cuda()
            outputs=l(imgs)
            loss=loss_fn(outputs,targets)
            total_test_loss=total_test_loss+loss.item()
            # number of correct predictions in this batch
            accuracy=(outputs.argmax(1) == targets).sum()
            total_accuracy=total_accuracy+accuracy

    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy/test_data_len))
    writer.add_scalar("test_loss",total_test_loss,total_test_step)
    writer.add_scalar("accuracy",total_accuracy/test_data_len,total_test_step)
    total_test_step=total_test_step+1

    # save the model after each epoch
    torch.save(l,"l_model{}.pth".format(i))
    print("Model saved")

writer.close()

Alternatively, define the training device once and move the model, the loss function, and the data to it with .to(device):

# define the training device
device=torch.device("cuda" if torch.cuda.is_available() else "cpu")
l=liu()
# move the model to the device
l=l.to(device)
# loss function
loss_fn=nn.CrossEntropyLoss()
# move the loss function to the device
loss_fn=loss_fn.to(device)
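
The batches in the training and test loops have to be moved to the same device as well; a short sketch of the pattern, matching the loops above:

for data in train_dataloader:
    imgs,targets=data
    imgs=imgs.to(device)        # move the batch to the GPU (or stay on the CPU)
    targets=targets.to(device)
    outputs=l(imgs)
    # ... loss, backward, step as before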

6. Complete model validation (inference)

import torch
import torchvision.transforms
from PIL import Image
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

# image path
image_path="../imgs/img.png"
# read the image
image=Image.open(image_path)
print(image)
# a png may carry an alpha channel; keep only the RGB channels
image=image.convert("RGB")
# the network expects 32x32 input while the original image is 800x500, so it must be resized
transform=torchvision.transforms.Compose([torchvision.transforms.Resize((32,32)),
                                          torchvision.transforms.ToTensor()])
# resize and convert the image to a tensor
image=transform(image)
print(image.shape)
# network model
class liu(nn.Module):
    def __init__(self):
        super(liu,self).__init__()
        self.mdoel1=Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self,x):
        x=self.mdoel1(x)
        return x
# load the trained network model
# map the GPU-trained weights onto the CPU
model=torch.load("../GPU/l_model9.pth",map_location=torch.device('cpu'))
print(model)
# run the model on the image
# add a batch dimension: 3-D image -> 4-D tensor
image=torch.reshape(image,(1,3,32,32))
# switch the model to evaluation mode
model.eval()
with torch.no_grad():
    output=model(image)
print(output)
# predicted class: index of the largest logit
print(output.argmax(1))
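
The argmax is just a class index; mapping it back to a name needs the CIFAR10 class list (a small sketch; the list below is the standard CIFAR10 label order):

# CIFAR10 class names in label order
classes=["airplane","automobile","bird","cat","deer",
         "dog","frog","horse","ship","truck"]
print(classes[output.argmax(1).item()])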
