Error: customWinogradConvActLayer.cpp

customWinogradConvActLayer.cpp:159: std::unique_ptr nvinfer1::cudnn::WinogradConvActLayer::createConvolution(const nvinfer1::cudnn::CommonContext&, bool, const int8_t*) const: Assertion `configIsValid(context)' failed.

It turns out that the first computer had an NVIDIA 1080 Ti GPU and the engine had been created for it. The second computer had an NVIDIA K80 GPU. Though the TensorRT documentation is vague about this, it seems that an engine created on a specific GPU can only be used for inference on the same model of GPU!

When I created a plan file on the K80 computer, inference worked fine.

Tried with: TensorRT 2.1, cuDNN 6.0 and CUDA 8.0
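Since the serialized plan is specific to the GPU it was built on, the practical fix is to build and serialize the engine on the deployment machine itself. Below is a minimal sketch (not the original poster's code) of saving and loading a plan file with the classic nvinfer1 C++ API; exact signatures, such as the ILogger::log override and the third argument to deserializeCudaEngine, vary between TensorRT versions, so treat it as illustrative.

```cpp
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT builder/runtime interfaces.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

// Serialize an already-built engine to a plan file. Run this on the machine
// (and GPU model) that will later execute inference.
void savePlan(nvinfer1::ICudaEngine& engine, const char* path) {
    nvinfer1::IHostMemory* plan = engine.serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    plan->destroy();  // older TensorRT releases use destroy(); newer ones use delete
}

// Deserialize a plan file. This only succeeds on the same GPU model the plan
// was built for; a mismatch produces failures like the assertion above.
nvinfer1::ICudaEngine* loadPlan(nvinfer1::ILogger& logger, const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    // The third argument (IPluginFactory*) applies to TensorRT 2.x-7.x.
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```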

Cause:

1. After the GPU driver was reinstalled, CUDA/cuDNN/TensorRT were not reinstalled.

2. Each GPU generation needs the matching toolkit versions installed; for example, an RTX 2080 requires CUDA 10. A quick way to compare two machines is shown in the sketch below.
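To check whether the build and deployment machines actually match, a small program can print the GPU model, its compute capability, and the driver/runtime CUDA versions. This is a generic sketch, not part of the original post; it uses only standard CUDA runtime calls (cudaGetDeviceProperties, cudaDriverGetVersion, cudaRuntimeGetVersion).

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVer);  // CUDA runtime version the program links against

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of GPU 0

    std::printf("GPU: %s, compute capability sm_%d%d\n",
                prop.name, prop.major, prop.minor);
    std::printf("CUDA driver: %d.%d, runtime: %d.%d\n",
                driverVer / 1000, (driverVer % 1000) / 10,
                runtimeVer / 1000, (runtimeVer % 1000) / 10);
    return 0;
}
```

Running this on both machines immediately exposes a mismatch like the one above: the 1080 Ti reports compute capability sm_61, while the K80 reports sm_37.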
