
Commit ac11172

README milestones and test policy update.
1 parent 6683f26

File tree

1 file changed: +17 -15 lines changed


README.md

Lines changed: 17 additions & 15 deletions
@@ -1,7 +1,5 @@
 # ocannl
 
-NOTE TO POTENTIAL CONTRIBUTORS: reach out so I can adjust my work style -- start using branches for refactoring. Otherwise you face frustration as the code might be broken. Tagged versions of the code are guaranteed to work as well as the given stage of the project permitted.
-
 OCANNL is sponsored by [Ahrefs](https://ocaml.org/success-stories/peta-byte-scale-web-crawler)! [Visit the Ahrefs website.](https://ahrefs.com/)
 
 ## OCANNL -- OCaml Compiles Algorithms for Neural Networks Learning
@@ -66,11 +64,13 @@ NOTE: debug logging from CUDA in complex settings is a bit tricky, it involves a
 
 This is very tentative.
 
-* 0.6: more precisions, convolution, block tensors, improvements to dimension labels.
-  * DONE at head: BF16, FP8.
-  * Requires extending expressivity of projections and the generalized einsum notation.
-  * Then, we can add convnet building blocks and corresponding examples starting with MNIST.
-  * Verify or rethink usefulness of dimension labels, and whether to introduce axis labels.
+* 0.6: more precisions, initialization, counter-based randomness, convolution, block tensors, improvements to dimension labels.
+  * DONE: BF16, FP8.
+  * DONE: Extended expressivity of projections and the generalized einsum notation to cover strided iteration and convolution.
+  * DONE: Parameter initialization on devices.
+  * TODO: Counter-based randomness via Threefry.
+  * TODO: Add convnet building blocks and corresponding examples starting with MNIST.
+  * TODO: Verify or rethink usefulness of dimension labels, and whether to introduce axis labels.
 * 0.7: Replicate the scaffolding from [llm.c](https://github.com/karpathy/llm.c) for training GPT-2.
   * Useful building blocks for models in [lib/nn_blocks.ml](lib/nn_blocks.ml).
   * A language model example.
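
The 0.6 milestone above marks counter-based randomness via Threefry as a TODO. For orientation, here is a minimal, hypothetical OCaml sketch of Threefry-2x32 (after Salmon et al., "Parallel Random Numbers: As Easy as 1, 2, 3"); it is not OCANNL's implementation, all names are illustrative, and it assumes a 64-bit OCaml runtime so 32-bit arithmetic can be emulated with masked native ints:

```ocaml
(* Illustrative sketch of counter-based randomness in the style of
   Threefry-2x32; NOT OCANNL's actual implementation. *)

let mask32 x = x land 0xFFFFFFFF
let rotl32 x n = mask32 ((x lsl n) lor (x lsr (32 - n)))

(* Per-round rotation amounts for Threefry-2x32. *)
let rotations = [| 13; 15; 26; 6; 17; 29; 16; 24 |]

(* 20 rounds mapping a (key, counter) pair of 32-bit word pairs to two
   pseudo-random 32-bit words; purely functional, no hidden RNG state. *)
let threefry2x32 (k0, k1) (c0, c1) =
  let ks2 = mask32 (k0 lxor k1 lxor 0x1BD11BDA) in
  let keys = [| k0; k1; ks2 |] in
  let x0 = ref (mask32 (c0 + k0)) and x1 = ref (mask32 (c1 + k1)) in
  for r = 0 to 19 do
    x0 := mask32 (!x0 + !x1);
    x1 := rotl32 !x1 rotations.(r mod 8);
    x1 := !x1 lxor !x0;
    (* Inject the key schedule every 4 rounds. *)
    if (r + 1) mod 4 = 0 then begin
      let s = (r + 1) / 4 in
      x0 := mask32 (!x0 + keys.(s mod 3));
      x1 := mask32 (!x1 + keys.((s + 1) mod 3) + s)
    end
  done;
  (!x0, !x1)
```

The appeal for tensor programs is that the result depends only on the (key, counter) arguments, so a call like `threefry2x32 seed (cell_index, step)` lets every tensor cell draw its value independently and in parallel, with no RNG state to synchronize across devices.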
@@ -81,16 +81,14 @@
   * Then harvested from [How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog](https://siboehm.com/articles/22/CUDA-MMM).
   * Finally from [llm.c](https://github.com/karpathy/llm.c).
   * These will require splitting a routine into multiple CUDA kernels.
-* 0.9: A new abstraction layer automating compilation/linking, execution, and some data transfers.
-  * E.g. host-device transfers: copy from host if host update is later than the previous device update.
-  * Concise syntax for transfers into the merge buffer since we know which tensor node is transferred and where to.
-  * At the end of 0.8.x, OCANNL has a REPL.
-* 0.10: Optimize performance: program search.
+* 0.9: Optimize performance: program search.
   * Instead of dynamic scheduling as in tinygrad, we can schedule statically by program search.
   * We should also reproduce the search that tinygrad is doing.
   * Check which optimizations are missing against the implementation of [llm.c](https://github.com/karpathy/llm.c).
-* 1.0: Few documentation gaps, some degree of feature completeness.
+* 1.0: Few documentation gaps, some degree of feature completeness, ergonomics, safety.
   * Feature completeness demonstrated by resolving / implementing a few of the $\color{green}{\text{explore}}$ issues.
+  * Concise syntax for transfers into the merge buffer, since we know which tensor node is transferred and where to.
+  * Similarly to how contexts track initialization dependencies for compilation, we should also track them for execution.
 
 ### Releases
 
@@ -130,7 +128,7 @@ For more details, see [CHANGES](CHANGES.md).
 
 OCANNL follows different design choices than [OWL](https://ocaml.xyz/). For example:
 
-* OCANNL is not functorized.
+* OCANNL is not functorized, except that it uses first-class modules for backends.
 * OCANNL has fewer abstraction layers.
 * OCANNL has a more powerful shape inference.
 * OCANNL only supports backpropagation, while OWL supports full forward and backward auto-diff.
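
As a gloss on the first-class-modules point above: a backend can be packed into an ordinary value and chosen at run time, without functorizing the code that uses it. The sketch below is hypothetical; the `Backend` signature and backend modules are invented for illustration and do not reproduce OCANNL's actual interfaces:

```ocaml
(* Hypothetical sketch: selecting a backend via first-class modules.
   The signature is invented; OCANNL's real backend interface is richer. *)

module type Backend = sig
  val name : string
  val compile : string -> unit
end

module Cuda_backend : Backend = struct
  let name = "cuda"
  let compile src = Printf.printf "[%s] compiling: %s\n" name src
end

module Cc_backend : Backend = struct
  let name = "cc"
  let compile src = Printf.printf "[%s] compiling: %s\n" name src
end

(* The backend is a runtime value, so the choice can come from
   configuration instead of a functor application fixed at build time. *)
let select_backend : string -> (module Backend) = function
  | "cuda" -> (module Cuda_backend)
  | _ -> (module Cc_backend)

let () =
  let (module B : Backend) = select_backend "cuda" in
  B.compile "fun x -> x +. 1.0"
```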
@@ -148,4 +146,8 @@ OCANNL follows different design choices than [OWL](https://ocaml.xyz/). For exam
 
 Although the project is called `ocannl`, the main package is called `neural_nets_lib`, to avoid the (opam linter's) complaint that the name can be confused with other packages. This also clarifies that `ocannl` is composed of `arrayjit` and `neural_nets_lib`.
 
-The dependency on `ocaml-cudajit` is optional, so you have to install it first to enable the Cuda backend.
+The dependencies on `cudajit` and `metal` are optional, so you have to install them first to enable the CUDA and Apple Metal backends, respectively.
+
+## Development
+
+NOTE TO POTENTIAL CONTRIBUTORS: while I am starting to work with PRs on separate branches rather than just a stream of commits on the main branch, design migrations will be broken into small PRs to avoid letting the main (master) branch go stale. We allow failing tests on the main branch, although going forward this should happen less often for unit tests. Tagged, i.e. released, versions of the code are guaranteed to work as well as the given stage of the project permitted; the policy is that all tests must pass for releases.
