Merge branch 'master' of https://github.com/mratsim/Arraymancer
mratsim committed May 5, 2018
2 parents 6ee69b1 + f5f434f commit 36e922f
Showing 16 changed files with 279 additions and 1,024 deletions.
63 changes: 37 additions & 26 deletions README.md
@@ -262,7 +262,7 @@ Arraymancer requires a BLAS and Lapack library.

## Full documentation

Detailed API is available on Arraymancer official [documentation](https://mratsim.github.io/Arraymancer/).
Detailed API is available at Arraymancer official [documentation](https://mratsim.github.io/Arraymancer/).

## Features

@@ -433,31 +433,42 @@ Epoch is: 0
Tensors, CudaTensors and ClTensors do not have the same features implemented yet.
Also, CudaTensors and ClTensors can only hold float32 or float64 values, while CPU Tensors can hold integers, strings, booleans or any custom object.

Here is a comparative table; note that this feature set is developing very rapidly.

| Action | Tensor | CudaTensor |
| ------ | ------ | ---------- |
| Accessing tensor properties |[x]|[x]|
| Tensor creation |[x]| by converting a cpu Tensor|
| Accessing or modifying a single value |[x]|[]|
| Iterating on a Tensor |[x]|[]|
| Slicing a Tensor |[x]|[x]|
| Slice mutation `a[1,_] = 10` |[x]|[]|
| Comparison `==`|[x]| Coming soon|
| Element-wise basic operations|[x]|[x]|
| Universal functions |[x]|[x]|
| Automatically broadcasted operations |[x]| Coming soon|
| Matrix-Matrix and Matrix-Vector multiplication|[x]|[x] Note that sliced CudaTensors must explicitly be made contiguous for the moment|
| Displaying a tensor |[x]|[x]|
| Higher-order functions (map, apply, reduce, fold)|[x]| Apply, but only internally|
| Transposing | [x] | [x] |
| Converting to contiguous | [x] | [x] |
| Reshaping |[x] | [] |
| Explicit broadcast | [x] | Coming soon |
| Permuting dimensions | [x]| Coming soon |
| Concatenating tensors along existing dimension | [x]|[]|
| Squeezing singleton dimension |[x]| Coming soon|
| Slicing + squeezing |[x] | Coming soon |
Here is a comparative table of the core features; note that this feature set is developing
rapidly. A minimal usage sketch follows the table.

| Action | Tensor | CudaTensor | ClTensor |
| ------------------------------------------------- | --------------------------- | -------------------------- | -------------------------- |
| Accessing tensor properties | [x] | [x] | [x] |
| Tensor creation | [x] | by converting a cpu Tensor | by converting a cpu Tensor |
| Accessing or modifying a single value | [x] | [] | [] |
| Iterating on a Tensor | [x] | [] | [] |
| Slicing a Tensor | [x] | [x] | [x] |
| Slice mutation `a[1,_] = 10` | [x] | [] | [] |
| Comparison `==` | [x] | [] | [] |
| Element-wise basic operations | [x] | [x] | [x] |
| Universal functions | [x] | [] | [] |
| Automatically broadcasted operations | [x] | [x] | [x] |
| Matrix-Matrix and Matrix-Vector multiplication | [x] | [x] | [x] |
| Displaying a tensor | [x] | [x] | [x] |
| Higher-order functions (map, apply, reduce, fold) | [x] | internal only | internal only |
| Transposing | [x] | [x] | [] |
| Converting to contiguous | [x] | [x] | [] |
| Reshaping | [x] | [x] | [] |
| Explicit broadcast | [x] | [x] | [x] |
| Permuting dimensions | [x] | [] | [] |
| Concatenating tensors along existing dimension | [x] | [] | [] |
| Squeezing singleton dimension | [x] | [x] | [] |
| Slicing + squeezing | [x] | [] | [] |
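
As a minimal sketch of moving between backends — assuming Arraymancer is compiled with `-d:cuda` / `-d:opencl` and that the conversion procs are named `cuda`, `opencl` and `cpu` (check the documentation if your version differs):

```nim
import arraymancer

let a = [[1'f32, 2'f32, 3'f32],
         [4'f32, 5'f32, 6'f32]].toTensor   # 2x3 Tensor[float32] on the CPU

echo a.shape        # accessing tensor properties works on every backend
echo a[1, _]        # slicing: the second row

when defined(cuda):
  let ca = a.cuda             # copy the CPU tensor to a CudaTensor (assumed proc name)
  echo (ca + ca).cpu          # element-wise addition on the GPU, then back to the CPU

when defined(opencl):
  let cla = a.opencl          # copy the CPU tensor to a ClTensor (assumed proc name)
  echo (cla + cla).cpu        # element-wise addition on the OpenCL device
```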

Advanced features built upon this are:
- Neural networks: Dense and Convolutional neural networks are supported on CPU. Primitives are available on Cuda.
- Linear algebra: Least squares solver and eigenvalue decomposition for symmetric matrices.
- Machine Learning: Accuracy score, common loss functions (MAE, MSE, ...), Principal Component Analysis (PCA).
- Statistics: Covariance matrix.
- IO & Datasets: CSV reading and writing, and reading MNIST files.
- A tensor plotting tool using Python matplotlib.
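
For illustration, a rough sketch of the statistics and machine-learning layer on a tiny dataset; the proc names `covariance_matrix` and `pca` and their signatures are assumptions here, so refer to the documentation below for the authoritative API:

```nim
import arraymancer

let data = [[2.5, 2.4],
            [0.5, 0.7],
            [2.2, 2.9],
            [1.9, 2.2]].toTensor   # 4 observations x 2 features

# Statistics: covariance of the two features (assumed proc name)
echo data.covariance_matrix

# Machine learning: keep the first principal component (assumed signature)
echo data.pca(1)
```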

Detailed API is available at Arraymancer [documentation](https://mratsim.github.io/Arraymancer/).

### Speed

48 changes: 42 additions & 6 deletions arraymancer.nimble
@@ -177,18 +177,18 @@ task test_release, "Run all tests - Release mode":
test "tests_cpu"

task gen_doc, "Generate Arraymancer documentation":
switch("define", "doc")

# TODO: Industrialize: something more robust that only checks .nim files (and not .DS_Store ...)
for filePath in listFiles("src/tensor/"):
let modName = filePath[11..^5] # Removing src/tensor/ (11 chars) and .nim (4 chars) # TODO: something more robust
if modName[^4..^1] != "cuda": # Cuda doc is broken https://github.com/nim-lang/Nim/issues/6910
exec r"nim doc -o:docs/build/tensor." & modName & ".html " & filePath
# Cuda doc is broken https://github.com/nim-lang/Nim/issues/6910
# Delete doc comment from nimcuda before using this
exec r"nim doc -o:docs/build/tensor." & modName & ".html " & filePath

for filePath in listFiles("src/nn_primitives/"):
let modName = filePath[18..^5] # Removing src/nn_primitives/ (18 chars) and .nim (4 chars) # TODO: something more robust
if modName[^5..^1] != "cudnn": # Cuda doc is broken https://github.com/nim-lang/Nim/issues/6910
exec r"nim doc -o:docs/build/nnp." & modName & ".html " & filePath
# Cuda doc is broken https://github.com/nim-lang/Nim/issues/6910
# Delete doc comment from nimcuda before using this
exec r"nim doc -o:docs/build/nnp." & modName & ".html " & filePath

for filePath in listFiles("src/autograd/"):
let modName = filePath[13..^5] # Removing src/autograd/ (13 chars) and .nim (4 chars) # TODO: something more robust
@@ -215,8 +215,44 @@ task gen_doc, "Generate Arraymancer documentation":
let modName = filePath[18..^5]
exec r"nim doc -o:docs/build/nn_optimizers." & modName & ".html " & filePath

for filePath in listFiles("src/nn/shapeshifting/"):
let modName = filePath[21..^5]
exec r"nim doc -o:docs/build/nn_optimizers." & modName & ".html " & filePath

for filePath in listFiles("src/nn_dsl/"):
let modName = filePath[11..^5]
exec r"nim doc -o:docs/build/nn_dsl." & modName & ".html " & filePath

for filePath in listFiles("src/linear_algebra/"):
let modName = filePath[19..^5]
exec r"nim doc -o:docs/build/la." & modName & ".html " & filePath

for filePath in listFiles("src/stats/"):
let modName = filePath[10..^5]
exec r"nim doc -o:docs/build/stats." & modName & ".html " & filePath

for filePath in listFiles("src/ml/dimensionality_reduction/"):
let modName = filePath[32..^5]
exec r"nim doc -o:docs/build/ml." & modName & ".html " & filePath

for filePath in listFiles("src/ml/metrics/"):
let modName = filePath[15..^5]
exec r"nim doc -o:docs/build/ml." & modName & ".html " & filePath

for filePath in listFiles("src/io/"):
let modName = filePath[7..^5]
exec r"nim doc -o:docs/build/io." & modName & ".html " & filePath

for filePath in listFiles("src/datasets/"):
let modName = filePath[13..^5]
exec r"nim doc -o:docs/build/datasets." & modName & ".html " & filePath

# Process the rst
for filePath in listFiles("docs/"):
if filePath[^4..^1] == ".rst":
let modName = filePath[5..^5]
exec r"nim rst2html -o:docs/build/" & modName & ".html " & filePath

# Copy stylesheets
cpFile("docs/docutils.css", "docs/build/docutils.css")
cpFile("docs/nav.css", "docs/build/nav.css")
48 changes: 47 additions & 1 deletion changelog.md
@@ -1,3 +1,49 @@
Arraymancer v0.4.0 May 05 2018 "The Name of the Wind"
=====================================================

Changes:

- Core:
- OpenCL tensors are now available! However, Arraymancer naively selects the first OpenCL device available, which may be a CPU or a GPU. They support basic and broadcasted operations (addition, matrix multiplication, element-wise multiplication, ...)
- Addition of `argmax` and `argmax_max` procs (a short usage sketch follows this list).

- Datasets:
- Loading the MNIST dataset from http://yann.lecun.com/exdb/mnist/
- Reading from and writing to CSV files

- Linear algebra:
- Least squares solver
- Eigenvalues and eigenvectors decomposition for symmetric matrices

- Machine Learning
- Principal Component Analysis (PCA)

- Statistics
- Computation of covariance matrices

- Neural network
- Introduction of a short intuitive syntax to build neural networks! (A blend of Keras and PyTorch).
- Maxpool2D layer
- Mean Squared Error loss
- Tanh and softmax activation functions

- Examples and tutorials
- Digit recognition using Convolutional Neural Net
- Teaching Fizzbuzz to a neural network

- Tooling
- Plotting tensors through Python
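
A quick sketch of the new `argmax` procs; the exact signatures (`argmax(t, axis)` returning an index tensor and `argmax_max(t, axis)` returning an `(indices, maxes)` tuple) are assumptions to verify against the API documentation:

```nim
import arraymancer

let t = [[1, 9, 2],
         [7, 3, 8]].toTensor

echo t.argmax(1)                  # index of the maximum of each row (assumed signature)

let (idx, vals) = t.argmax_max(0) # per-column argmax and max (assumed signature)
echo idx                          # positions of the column maxima
echo vals                         # the column maxima themselves
```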

Several updates linked to Nim's rapid development, plus several bugfixes.

Thanks:
- Bluenote10 for the CSV writing proc and the tensor plotting tool
- Miran for benchmarking
- Manguluka for tanh
- Vindaar for bugfixing
- Every participant in the RFCs
- And you, the users of the library.

Arraymancer v0.3.0 Dec. 14 2017 "Wizard's First Rule"
=====================================================

@@ -89,7 +135,7 @@ Without further ado:
- Slicing (read-only) is supported
- Transforming a slice to a new contiguous Tensor is supported
- Tensors
- Introduction of `unsafe` operations that work without copy: `unsafeTranspose`, `unsafeReshape`, `unsafebroadcast`, `unsafeBroadcast2`, `unsafeContiguous`,
- Implicit broadcasting via `.+, .*, ./, .-` and their in-place equivalent `.+=, .-=, .*=, ./=`
- Several shapeshifting operations: `squeeze`, `at` and their `unsafe` version.
- New property: `size`