
Colaboratory


Overview

ffi-experimental with GPU support (CUDA 10) runs on Google Colaboratory. The setup takes about 15 minutes. This recipe is useful for testing algorithms on a GPU.

Setup

You can use ffi-experimental on Google Colaboratory. Follow these steps:

  1. Open Colaboratory.
  2. Change the runtime type to GPU.
  3. Run the following code cell.
  4. Wait about 15 minutes.
import os

os.chdir('/content')
# Make the bundled libtorch/MKL libraries and the hvr GHC binaries visible to the shell.
os.environ["LD_LIBRARY_PATH"] = os.environ.get("LD_LIBRARY_PATH", "") + ":/content/ffi-experimental/deps/libtorch/lib:/content/ffi-experimental/deps/mklml/lib"
os.environ["PATH"] += ":/opt/ghc/bin"

!git clone --recursive https://github.com/hasktorch/ffi-experimental.git
!sudo apt -y --allow-downgrades --allow-remove-essential --allow-change-held-packages install locales software-properties-common apt-transport-https
!sudo add-apt-repository -y ppa:hvr/ghc
!sudo apt-get update -qq && sudo apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages install build-essential zlib1g-dev liblapack-dev libblas-dev ghc-8.6.5 cabal-install-head devscripts

os.chdir('/content/ffi-experimental')
# Executables installed under /root/.local/bin (e.g. by cabal) should be on the shell PATH.
os.environ["PATH"] += ":/root/.local/bin"
!cd deps ; ./get-deps.sh -a cu100 -c 
!./setup-cabal.sh
!cabal new-update
!cabal new-build all
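
Once the build finishes (or at any point), you can confirm that a GPU is actually attached to the runtime. This is an optional sanity check; nvidia-smi is preinstalled on Colab GPU runtimes:

!nvidia-smi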

REPL

When you run !cabal new-repl hasktorch, a GHCi prompt opens. Try some Haskell code.

> !cabal new-repl hasktorch
Build profile: -w ghc-8.6.5 -O1
In order, the following will be built (use -v for more details):
 - hasktorch-0.2.0.0 (lib) (ephemeral targets)
Preprocessing library for hasktorch-0.2.0.0..
GHCi, version 8.6.5: http://www.haskell.org/ghc/  :? for help
[ 1 of 12] Compiling Torch.Backend    ( src/Torch/Backend.hs, interpreted )
[ 2 of 12] Compiling Torch.DType      ( src/Torch/DType.hs, interpreted )
[ 3 of 12] Compiling Torch.Layout     ( src/Torch/Layout.hs, interpreted )
[ 4 of 12] Compiling Torch.Scalar     ( src/Torch/Scalar.hs, interpreted )
[ 5 of 12] Compiling Torch.TensorOptions ( src/Torch/TensorOptions.hs, interpreted )
[ 6 of 12] Compiling Torch.Tensor     ( src/Torch/Tensor.hs, interpreted )
[ 7 of 12] Compiling Torch.TensorFactories ( src/Torch/TensorFactories.hs, interpreted )
[ 8 of 12] Compiling Torch.Functions  ( src/Torch/Functions.hs, interpreted )
[ 9 of 12] Compiling Torch.Static     ( src/Torch/Static.hs, interpreted )
[10 of 12] Compiling Torch.Autograd   ( src/Torch/Autograd.hs, interpreted )
[11 of 12] Compiling Torch.NN         ( src/Torch/NN.hs, interpreted )
[12 of 12] Compiling Torch            ( src/Torch.hs, interpreted )
Ok, 12 modules loaded.
*Torch> t=asTensor ([0..7] :: [Double])
*Torch> t
Tensor Double [8] [ 0.0000,  1.0000   ,  2.0000   ,  3.0000   ,  4.0000   ,  5.0000   ,  6.0000   ,  7.0000   ]
*Torch> toCUDA(t)
Tensor Double [8] [ 0.0000,  1.0000   ,  2.0000   ,  3.0000   ,  4.0000   ,  5.0000   ,  6.0000   ,  7.0000   ]
*Torch> tc=toCUDA(t)
*Torch> toCPU(tc*tc)
Tensor Double [8] [ 0.0000,  1.0000   ,  4.0000   ,  9.0000   ,  16.0000   ,  25.0000   ,  36.0000   ,  49.0000   ]
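
The same computation can also be written as a standalone program instead of being typed at the prompt. Below is a minimal sketch that only assumes the API already shown in the session above (asTensor, toCUDA, toCPU, and the Num instance on Tensor):

import Torch

main :: IO ()
main = do
  let t  = asTensor ([0..7] :: [Double])   -- CPU tensor holding the values 0..7
      tc = toCUDA t                        -- copy the tensor to the GPU
  print (toCPU (tc * tc))                  -- square element-wise on the GPU, copy back, print

Save it as a file and :load it from the same GHCi session to run main.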

Issues

  1. How do we draw graphs?