pytorch/tensorflow: Checking Whether GPU and CUDA Are Available

Contents

  • pytorch
  • tensorflow

pytorch

import torch

# True if PyTorch can see a CUDA-capable GPU
flag = torch.cuda.is_available()
print(flag)

ngpu = 1
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
print(device)
print(torch.cuda.get_device_name(0))
# Allocate a small tensor on the GPU to confirm CUDA actually works
print(torch.rand(3, 3).cuda())
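
Beyond the basic availability check, it can also help to print the CUDA version PyTorch was built against and enumerate every visible GPU. A minimal sketch (assuming a PyTorch build with CUDA support; these are standard torch APIs):

import torch

# CUDA version this PyTorch build was compiled with (None for CPU-only builds)
print(torch.version.cuda)
# Number of GPUs visible to PyTorch
print(torch.cuda.device_count())
# Name of each visible GPU
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))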

tensorflow

Method 1

import tensorflow as tf
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")

Method 2

import tensorflow as tf

# TF 1.x: create a session and list the devices it can use.
# Do not assign the session to a name like `tf`, which would shadow the module.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.list_devices())

Method 3

import tensorflow as tf

# Returns the name of the first GPU device, or an empty string if none is found
print(tf.test.gpu_device_name())

Method 4

from tensorflow.python.client import device_lib

# Lists all local devices (CPU and any GPUs) visible to TensorFlow
print(device_lib.list_local_devices())
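
Method 5 (TensorFlow 2.x)

Methods 2 to 4 use the TensorFlow 1.x API (tf.Session / tf.ConfigProto). On TensorFlow 2.x, the equivalent check looks roughly like the sketch below (assuming a TF 2.x installation):

import tensorflow as tf

# Physical GPUs TensorFlow can see; an empty list means CPU only
gpus = tf.config.list_physical_devices('GPU')
print(gpus)
# Whether this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())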
