Extending PyTorch with Python, C++, and CUDA

Writing an extension in C++ and CUDA raises a few practical questions: how to bind the C++ interface to Python, how to exchange tensor data between Python and C++, and which build tool to use to compile the C++ code. These are addressed in turn below.

Data interface

ATen

ATen is the foundational tensor and mathematical operation library on which everything else is built. It provides a core Tensor class, on which many hundreds of operations are defined, and almost all other Python and C++ interfaces in PyTorch are built on top of it. Most of these operations have both CPU and GPU implementations, to which the Tensor class will dynamically dispatch based on its type. A small example of using ATen could look as follows:

#include <ATen/ATen.h>

at::Tensor a = at::ones({2, 2}, at::kInt);
at::Tensor b = at::randn({2, 2});
auto c = a + b.to(at::kInt);

This Tensor class and all other symbols in ATen are found in the at:: namespace, documented in the ATen API reference.
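Inside an extension, this is the same tensor API your C++ functions operate on: tensors passed from Python arrive as at::Tensor (aliased to torch::Tensor once you include torch/extension.h), so no manual data conversion is required. As a concrete illustration, here is a minimal sketch of what the linear.cpp used in the rest of this post might contain; the function names match the pybind11 bindings shown later, but the signatures and the addmm-based implementation are assumptions for illustration, not the only possible design.

// linear.cpp -- illustrative sketch; the exact signatures are assumed
#include <torch/extension.h>
#include <vector>

// Forward pass of a linear layer: y = x * W^T + b
torch::Tensor linear_forward(torch::Tensor input,
                             torch::Tensor weight,
                             torch::Tensor bias) {
  return torch::addmm(bias, input, weight.t());
}

// Backward pass: gradients w.r.t. input, weight and bias
std::vector<torch::Tensor> linear_backward(torch::Tensor grad_output,
                                           torch::Tensor input,
                                           torch::Tensor weight) {
  auto grad_input  = grad_output.mm(weight);
  auto grad_weight = grad_output.t().mm(input);
  auto grad_bias   = grad_output.sum(0);
  return {grad_input, grad_weight, grad_bias};
}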

Compilation
Build the extension with setuptools; an example setup.py looks like this:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='linear_cpp',
    ext_modules=[
        # compile linear.cpp into an importable module named linear_cpp
        CppExtension('linear_cpp', ['linear.cpp']),
    ],
    cmdclass={
        # BuildExtension supplies the compiler and linker flags PyTorch needs
        'build_ext': BuildExtension
    })

ext_modules is a list, so it can hold several extensions at once. cmdclass hooks a custom step into the build command; you can define your own, but the provided BuildExtension is usually all you need.
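For quick iteration, PyTorch also provides a just-in-time alternative that compiles the sources at import time and needs no setup.py at all. A minimal sketch, reusing the module and source names from the example above:

from torch.utils.cpp_extension import load

# Compiles linear.cpp on first use, caches the build, and recompiles
# automatically when the source file changes.
linear_cpp = load(name='linear_cpp', sources=['linear.cpp'], verbose=True)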

C++ wrapper
Bind the C++ functions to Python with pybind11:

#include <torch/extension.h>

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &linear_forward, "linear forward");
  m.def("backward", &linear_backward, "linear backward");
}
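On the Python side, the bound forward and backward are typically wrapped in a torch.autograd.Function so that autograd calls the custom backward during backpropagation. A minimal sketch, assuming the signatures from the linear.cpp sketch above:

import torch
import linear_cpp  # the extension module built with setup.py above

class LinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias):
        ctx.save_for_backward(input, weight)
        return linear_cpp.forward(input, weight, bias)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        grad_input, grad_weight, grad_bias = linear_cpp.backward(grad_output, input, weight)
        return grad_input, grad_weight, grad_bias

LinearFunction.apply(x, w, b) can then be called inside a custom nn.Module in the same way as torch.nn.functional.linear.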

Reference

pytorch-parallel

THE C++ FRONTEND
