RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

【Pytorch】RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 512, 8, 8]], which is output 0 of ReluBackward1, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
This error appeared while running VGG16. Removing inplace=True from nn.ReLU(inplace=True) made the code run normally (inplace defaults to False). Oddly, AlexNet ran without error with inplace=True, and earlier VGG16 runs did not raise this either, so the root cause is still unclear. I am recording the problem here so the fix can be found quickly next time.
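The error hint suggests enabling anomaly detection to locate the operation whose saved tensor was overwritten. A minimal sketch of how that might look, assuming a typical training step (model, images, labels, and criterion are placeholder names, not taken from the original code):

```python
import torch

# Turn on anomaly detection so autograd records the forward op that
# created each tensor and reports the one later modified in place.
torch.autograd.set_detect_anomaly(True)

output = model(images)             # placeholder: your VGG16 forward pass
loss = criterion(output, labels)   # placeholder: your loss function
loss.backward()                    # the traceback now points at the offending op
```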

nn.ReLU(inplace=True),

change to

nn.ReLU(),

and the error goes away.
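For context, a minimal sketch of what the change looks like inside a VGG-style conv block (an illustrative snippet, not the original training code):

```python
import torch.nn as nn

# Before: an in-place ReLU overwrites its input tensor, which autograd
# may still need for the backward pass of the preceding layer.
block_inplace = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

# After: the default inplace=False writes to a new output tensor,
# so the saved activations stay at the version autograd expects.
block_safe = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=3, padding=1),
    nn.ReLU(),
)
```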

Reference blog: https://www.cnblogs.com/yunshangyue71/p/13294999.html
