CLASS torch.nn.Linear(in_features, out_features, bias=True)
Input:
(N, *, in_features), where * means any number of additional dimensions
Output:
(N, *, out_features), where * means that all dimensions except the last (out_features) have the same shape as the input
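A minimal shape sketch (all sizes here are illustrative, not from the original text):

import torch
import torch.nn as nn

linear = nn.Linear(in_features=3, out_features=7)
x = torch.randn(4, 5, 3)   # (N, *, in_features) with an extra middle dimension
y = linear(x)              # the transformation is applied to the last dimension
print(y.shape)             # torch.Size([4, 5, 7]) -- (N, *, out_features)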
CLASS torch.nn.Softmax(dim=None)
Applies the Softmax function to an n-dimensional input tensor, rescaling the values so that they all lie in the range [0, 1] and sum to 1.
The Softmax function is defined as:
\text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
When the input tensor is sparse, unspecified values are treated as -inf.
Input/Output: the output has the same shape as the input.
Returns: a tensor of the same shape as the input, with values in the range [0, 1].
import torch
import torch.nn as nn

x = torch.randn(2, 3) * 10
print(x)
y = nn.Softmax(dim=1)   # normalize along dimension 1, so each row sums to 1
print(y(x))
Output:
tensor([[ -6.2684,  -0.5306,   1.0101],
        [  5.2088, -10.6058,  -0.6508]])
tensor([[5.6812e-04, 1.7632e-01, 8.2311e-01],
        [9.9716e-01, 1.3506e-07, 2.8443e-03]])
CLASS torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
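No description survived for this entry, so here is a minimal usage sketch; the sizes are illustrative:

import torch
import torch.nn as nn

bn = nn.BatchNorm3d(num_features=4)   # num_features = channel dimension C
x = torch.randn(2, 4, 8, 8, 8)        # (N, C, D, H, W)
y = bn(x)
print(y.shape)                        # torch.Size([2, 4, 8, 8, 8]) -- shape unchanged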
CLASS torch.nn.ReLU(inplace=False)
Parameters:
inplace – if True, the operation is performed in place on the input tensor. Default: False
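A minimal sketch (the input values are illustrative):

import torch
import torch.nn as nn

relu = nn.ReLU()                     # inplace=False returns a new tensor
x = torch.tensor([-1.0, 0.0, 2.0])
print(relu(x))                       # tensor([0., 0., 2.])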
CLASS torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Input/output shape:
Input: (N, C_in, D_in, H_in, W_in); Output: (N, C_out, D_out, H_out, W_out), where
D_{out} = \left\lfloor \frac{D_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor
and H_out, W_out follow the same formula.
On dilation: (figures comparing a convolution without dilation, top, and with dilation, bottom)
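A minimal shape sketch (sizes are illustrative) showing how dilation spreads out the kernel and shrinks the output:

import torch
import torch.nn as nn

x = torch.randn(1, 2, 16, 16, 16)                     # (N, C_in, D, H, W)
conv = nn.Conv3d(2, 3, kernel_size=3)                 # default dilation=1
conv_dil = nn.Conv3d(2, 3, kernel_size=3, dilation=2)
print(conv(x).shape)      # torch.Size([1, 3, 14, 14, 14])
print(conv_dil(x).shape)  # torch.Size([1, 3, 12, 12, 12]) -- effective kernel size 5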
CLASS torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
Input/output shape:
Input: (N, C, D_in, H_in, W_in); Output: (N, C, D_out, H_out, W_out), with D_out, H_out, W_out given by the same floor formula as for Conv3d above.
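A minimal shape sketch (sizes are illustrative):

import torch
import torch.nn as nn

pool = nn.MaxPool3d(kernel_size=2)   # stride=None defaults to kernel_size
x = torch.randn(1, 3, 8, 8, 8)       # (N, C, D, H, W)
print(pool(x).shape)                 # torch.Size([1, 3, 4, 4, 4])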
CLASS torch.nn.Module
Base class for all neural networks in PyTorch; custom networks should also subclass it.
modules() returns an iterator over all modules in the network; if the exact same module instance is used more than once, it is returned only once.
import torch.nn as nn

l = nn.Linear(2, 2)
net = nn.Sequential(l, l, l)   # the same Linear instance used three times
for idx, m in enumerate(net.modules()):
    print(idx, ":", m)
Output:
0 : Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
  (2): Linear(in_features=2, out_features=2, bias=True)
)
1 : Linear(in_features=2, out_features=2, bias=True)
import torch.nn as nn

l1 = nn.Linear(2, 2)
l2 = nn.Linear(2, 2)
l3 = nn.Linear(2, 2)
net = nn.Sequential(l1, l2, l3)   # three distinct Linear instances
for idx, m in enumerate(net.modules()):
    print(idx, ":", m)
Output:
0 : Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
  (2): Linear(in_features=2, out_features=2, bias=True)
)
1 : Linear(in_features=2, out_features=2, bias=True)
2 : Linear(in_features=2, out_features=2, bias=True)
3 : Linear(in_features=2, out_features=2, bias=True)
named_parameters() returns an iterator over the module's parameters, yielding both the name of each parameter and the parameter itself:
import torch.nn as nn

class mywork(nn.Module):
    def __init__(self):
        super(mywork, self).__init__()
        self.construct_net()

    def construct_net(self):
        self.conv1 = nn.Conv2d(1, 2, 1)
        self.conv2 = nn.Conv2d(2, 3, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        return x

test = mywork()
for name, param in test.named_parameters():
    print(name)
    print(param)
Output:
conv1.weight
Parameter containing:
tensor([[[[0.4649]]], [[[0.2869]]]], requires_grad=True)
conv1.bias
Parameter containing:
tensor([ 0.8755, -0.4610], requires_grad=True)
conv2.weight
Parameter containing:
tensor([[[[-0.6527]], [[-0.0048]]],
        [[[-0.5545]], [[-0.4138]]],
        [[[ 0.6046]], [[-0.1250]]]], requires_grad=True)
conv2.bias
Parameter containing:
tensor([-0.0304,  0.3682, -0.3856], requires_grad=True)
Function signature:
torch.nn.functional.linear(input, weight, bias=None)
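This applies the linear transformation y = x A^T + b. A minimal sketch with illustrative sizes:

import torch
import torch.nn.functional as F

x = torch.randn(4, 3)          # (N, in_features)
weight = torch.randn(7, 3)     # (out_features, in_features)
bias = torch.randn(7)
y = F.linear(x, weight, bias)  # equivalent to x @ weight.t() + bias
print(y.shape)                 # torch.Size([4, 7])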
Function signature:
torch.nn.functional.interpolate(
input,
size=None,
scale_factor=None,
mode='nearest',
align_corners=None,
recompute_scale_factor=None,
antialias=False
)
Down- or up-samples the input to the given size or by the given scale_factor.
The algorithm used for interpolation is determined by mode.
Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
Input dimensions are interpreted in the form: \text{mini-batch} \times \text{channels} \times [\text{optional depth}] \times [\text{optional height}] \times \text{width}
The available interpolation algorithms (i.e. values of mode) are: nearest, linear (3-D only), bilinear, bicubic (4-D only), trilinear (5-D only), area, nearest-exact
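A minimal sketch with an illustrative 4-D input, showing one up-sampling and one down-sampling call:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)   # N x C x H x W
up = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
down = F.interpolate(x, size=(4, 4), mode='nearest')
print(up.shape)    # torch.Size([1, 3, 16, 16])
print(down.shape)  # torch.Size([1, 3, 4, 4])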
Parameters: