An OpenCL backend for torch.
| Component | Status | Examples of what works now |
|---|---|---|
| `require 'cltorch'` | works | `require 'cltorch'` |
| Device information | works | `print('num devices:', cltorch.getDeviceCount()); props = cltorch.getDeviceProperties(1)` |
| `torch.ClStorage` | works | `c = torch.ClStorage(); c = torch.ClStorage(3); c[1] = 5; c = torch.ClStorage{4,9,2}; c:fill(7); a = torch.Storage{1.5, 2.4, 5.3}; c:copy(a); c[2] = 21; a:copy(c); d = torch.ClStorage(3); d:copy(c)` |
| Conversion to/from `ClTensor` | works | `c = torch.ClTensor{7,4,5}; c = torch.ClTensor(3,2); c = torch.Tensor{2,6,9}:cl(); b = c:float(); c = torch.ClTensor{{3,1,6},{2.1,5.2,3.9}}; b:copy(c); c:copy(b); d = torch.ClTensor(2,3); d:copy(c); c[1][2] = 2.123` |
| Construction or extraction functions | started | `c:fill(1.345); c:zero(); print(torch.ClTensor.zeros(torch.ClTensor.new(), 3, 5)); print(torch.ClTensor.ones(torch.ClTensor.new(), 3, 5))` |
| Element-wise operations | done | `c:abs(); for _,name in ipairs({'log','exp','cos','acos','sin','asin','atan','tanh','ceil','floor','abs','round'}) do loadstring('c:' .. name .. '()')() end` |
| Basic operations | 50% done | `d = torch.ClTensor{{3,5,-2},{2.1,2.2,3.9}}; c = torch.ClTensor{{4,2,-1},{3.1,1.2,4.9}}; c:add(d); c:cmul(d); c:cdiv(d); c:add(3); c:mul(3); c:div(2); c = torch.add(c,3); c = torch.mul(c, 4); c = torch.div(c, 3); torch.pow(2,c); c:pow(2); torch.cpow(c,d); torch.cdiv(c,d); torch.pow(c,2); torch.clamp(c, 50, 100); c:clamp(50, 100); -c` |
| Overloaded operators | 80% done | `d = torch.ClTensor{{3,5,-2},{2.1,2.2,3.9}}; c = torch.ClTensor{{4,2,-1},{3.1,1.2,4.9}}; c = c + d; c = c - d; c = c / 2; c = c * 1.5; c = c + 4; c = c - 5` |
| Logical operations | done | `d = torch.ClTensor{{3,5,-2},{2.1,2.2,3.9}}; c = torch.ClTensor{{4,2,-1},{3.1,1.2,4.9}}; for _,name in ipairs({'lt','le','gt','ge','ne','eq'}) do print(loadstring('return c:' .. name .. '(5)')()) end; for _,name in ipairs({'lt','le','gt','ge','ne','eq'}) do print(loadstring('return torch.' .. name .. '(c,d)')()) end` |
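Putting the rows above together, a typical session might look like the sketch below. It only uses calls listed in the table; it requires an OpenCL-capable GPU plus driver, so it will error at `require 'cltorch'` or device query time on machines without one.

```lua
require 'cltorch'

-- inspect available OpenCL devices
print('num devices:', cltorch.getDeviceCount())

-- create a tensor on the host, then copy it to the GPU
local a = torch.Tensor{{3, 5, -2}, {2.1, 2.2, 3.9}}
local c = a:cl()          -- host Tensor -> ClTensor on the GPU

-- element-wise operations run on the GPU
c:abs()
c = c + 1
c:clamp(0, 4)

-- copy the result back to the host for printing
local b = c:float()
print(b)
```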
- First install the torch distro; see https://github.com/torch/distro.
- Then git clone the cltorch repository, cd into it, and run:
  `luarocks make rocks/cltorch-scm-1.rockspec`
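After the build completes, a quick smoke test from the shell (this assumes `th`, installed by the torch distro, is on your PATH, and that an OpenCL device is present):

```shell
th -e "require 'cltorch'; print('num devices:', cltorch.getDeviceCount())"
```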
Porting status by file, compared with the original cutorch files. Note that a `.cpp` file here could have been ported from a `.c`, `.cpp`, or `.cu` file.
| File | Migration status |
|---|---|
| THClTensorMathCompare.cpp | Done |
| THClTensorMathCompareT.cpp | Done |
| THClTensorMathPairwise.cpp | Done |
| THClTensor.h | Done |
| THClTensorCopy.h | Done |
| THClTensorMath.h | Done |
| THClTensor.cpp | 90% |
| THClTensorCopy.cpp | 50% |
| THClTensorMath.cpp | 5% |
| THClTensorIndex.cpp | 0% |
| THClTensorMath2.cpp | 20% |
| THClTensorMathBlas.cpp | 30% |
| THClBlas.cpp | 50% |
cltorch has the following build dependencies:
- Lua 5.1 libraries - used for runtime kernel templating
- clBLAS - provides GPU-based matrix operations, such as multiplication
- EasyCL - provides an abstraction layer over the low-level OpenCL API
- clew - similar to glew; means that cltorch can be loaded without any OpenCL library/runtime being present
At runtime, if you want to call any of the cltorch methods, you will also need:
- an OpenCL-compatible GPU
- an OpenCL library/driver (normally provided by the GPU vendor)
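Because the OpenCL driver is only needed when cltorch methods are actually called, a script can guard its GPU path behind a device check. A sketch, using only the device-information calls from the table above (`pcall` guards against the query failing when no usable device/driver is present):

```lua
require 'cltorch'

-- probe for a usable OpenCL device before committing to the GPU path
local ok, count = pcall(cltorch.getDeviceCount)
if ok and count > 0 then
  print('using OpenCL device 1 of ' .. count)
  print(cltorch.getDeviceProperties(1))
  -- ... allocate ClTensors and run GPU work here ...
else
  print('no usable OpenCL device found; staying on CPU')
end
```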