
Commit 39c183d

README: reworked upcoming milestones, prepare for release
1 parent fb31801 commit 39c183d

File tree: 1 file changed, +25 −15 lines


README.md

Lines changed: 25 additions & 15 deletions
@@ -36,7 +36,7 @@ OCANNL is sponsored by [Ahrefs](https://ocaml.org/success-stories/peta-byte-scal
 
 ## Usage
 
-Starting from OCANNL 0.5.2, the CUDA backend requires at least CUDA version 12.8.
+Starting from OCANNL 0.5.2, the CUDA backend requires at least CUDA version 12.8. The Metal backend requires at least MSL version 3.1.
 
 [API documentation entry point](https://ahrefs.github.io/ocannl/dev/).
 
@@ -64,30 +64,33 @@ NOTE: debug logging from CUDA in complex settings is a bit tricky, it involves a
 
 This is very tentative.
 
-* 0.6: more precisions, initialization, counter-based randomness, convolution, block tensors, improvements to dimension labels.
-  * DONE: BF16, FP8.
-  * DONE: Extended expressivity of projections and the generalized einsum notation to cover strided iteration and convolution.
-  * DONE: Parameter initialization on devices.
-  * TODO: New syntax for inline parameter definitions; record-based syntax instead of string-based.
-  * TODO: counter-based randomness via threefry.
-* 0.6.1:
+* **0.6.1: convolution NNs, transformers.**
+  * Counter-based randomness via threefry, second pass (pointwise and weak-but-efficient variants); normal distribution operation.
+  * Padding inference during shape inference.
+  * New syntax for inline parameter definitions; record-based syntax instead of string-based.
   * Add convnet building blocks and corresponding examples starting with MNIST.
   * Add transformer building blocks.
-  * Integrate with huggingface-tokenizers
+  * Integrate with huggingface-tokenizers.
   * Add a GPT-2 style example, ideally benchmarkable against [llm.c](https://github.com/karpathy/llm.c).
-* 0.6.2:
-  * Verify or rethink usefulness of dimension labels, and whether to introduce axis labels.
+* **0.6.2: shape understanding and manipulation enhancements.**
+  * Verify or rethink the usefulness of dimension labels, a.k.a. dimension units, and whether to introduce axis labels.
   * Add concatenation to the einsum syntax (an axis that is a concatenation of two axes, each from another tensor); it's a generalization of stacking tensors.
-* 0.7: Optimize performance -- low hanging fruit.
+* **0.7: CPU-style performance and memory efficiency.**
+  * Milestone phrasing: enhancements to inlining-related and simplification-related optimizations, memory management, and session management.
+* **0.7.1: HIP backend (AMD hardware).**
+* **0.8: GPU-style performance -- low-hanging fruit.**
   * First harvested from [Fast Multidimensional Matrix Multiplication on CPU from Scratch](https://siboehm.com/articles/22/Fast-MMM-on-CPU).
   * Then harvested from [How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog](https://siboehm.com/articles/22/CUDA-MMM).
   * Finally from [llm.c](https://github.com/karpathy/llm.c).
   * These will either require splitting a routine into multiple kernels, or implementing the megakernel approach.
-* 0.8: Optimize performance: program search.
+  * Milestone phrasing: GPU tiling and related optimizations in the polyhedral style, with heuristic syntactic metrics for now.
+* **0.8.1: WebGPU backend.**
+* **0.9: Optimize performance: program search.**
   * Instead of dynamic scheduling as in tinygrad, we can schedule statically by program search.
   * We should also reproduce the search that tinygrad is doing.
   * Check which optimizations are missing against the implementation of [llm.c](https://github.com/karpathy/llm.c).
-* 1.0: Few documentation gaps, some degree of feature completeness, ergonomics, safety.
+  * Milestone phrasing: program search with execution-based per-backend or aggregate-of-backends cost functions. Starting with augmenting the tiling and layout mechanisms from v0.8 with cost functions, progressing to a broader range of code graph rewriting rules.
+* **1.0: Few documentation gaps, some degree of feature completeness, ergonomics, safety.**
   * Feature completeness demonstrated by resolving / implementing a few of the $\color{green}{\text{explore}}$ issues.
   * Concise syntax for transfers into the merge buffer since we know which tensor node is transferred and where to.
   * Similarly to how contexts track initialization dependencies for compilation, we should also track them for execution.
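The strided-iteration, convolution, and padding-inference milestones above all revolve around the same index arithmetic: output cell `i` reads input cell `i * stride + k - pad_left` for each kernel offset `k`. A minimal sketch in plain Python (OCANNL itself is OCaml; the helper names `same_padding` and `conv1d` are hypothetical, for illustration only, and do not reflect OCANNL's API or einsum syntax):

```python
# Illustrative sketch (NOT OCANNL code) of the index arithmetic behind
# strided/convolutional iteration and "same" padding inference.

def same_padding(in_size, kernel, stride):
    """Infer (left, right) padding so the output has ceil(in_size/stride) cells."""
    out_size = -(-in_size // stride)  # ceiling division
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    return total // 2, total - total // 2

def conv1d(xs, kernel, stride=1, padding=(0, 0)):
    """1-D convolution: out[i] = sum over k of xs[i*stride + k - pad_left] * kernel[k]."""
    pad_left, pad_right = padding
    padded = [0] * pad_left + list(xs) + [0] * pad_right
    out_size = (len(padded) - len(kernel)) // stride + 1
    return [
        sum(padded[i * stride + k] * kernel[k] for k in range(len(kernel)))
        for i in range(out_size)
    ]
```

For example, `conv1d([1, 2, 3, 4, 5], [1, 1, 1])` yields `[6, 9, 12]` (a valid convolution shrinks the axis), while padding with `same_padding(5, 3, 1)` keeps the output length at 5, which is what padding inference during shape inference would need to compute automatically.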
@@ -96,7 +99,14 @@ This is very tentative.
 
 For more details, see [CHANGES](CHANGES.md).
 
+* **0.6: more precisions, initialization, counter-based randomness, strided iteration.**
+  * BF16, FP8.
+  * Extended expressivity of projections and the generalized einsum notation to cover strided iteration and convolution.
+  * Parameter initialization on devices.
+  * Counter-based randomness via threefry, first pass (vectorized and cryptographic strength).
+  * Better precision inference, including top-down propagation.
 * **0.5.3: Apple Metal backend.**
+  * Also, the CUDA backend works on native Windows.
 * **0.5.2: More primitive operations.**
   * Supports a lot of primitive operations (including ternary ops), and ternary tensor operations.
   * `%cd` and `%op` support both curried and uncurried operator application syntax.
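The threefry milestones refer to counter-based randomness: instead of a stateful generator, a pure function maps a (key, counter) pair to random bits, so every tensor cell can draw its own value in parallel with no shared mutable state. A sketch of a Threefry-2x32-style permutation in plain Python (illustrative only, not OCANNL's implementation; the rotation constants and parity constant follow the published Threefry-2x32 parameters):

```python
# Sketch of counter-based randomness in the Threefry-2x32 style.
# NOT OCANNL code; for illustrating the (key, counter) -> bits idea.

MASK32 = 0xFFFFFFFF
ROTATIONS = (13, 15, 26, 6, 17, 29, 16, 24)  # Threefry-2x32 rotation schedule

def threefry2x32(key, counter, rounds=20):
    """Map a (key, counter) pair of 32-bit word pairs to two pseudo-random words.

    Pure function of its inputs: calling it twice with the same arguments
    returns the same bits, and distinct counters give independent streams.
    """
    k0, k1 = key
    # Third key word: both halves XORed with the Skein parity constant.
    ks = (k0, k1, (k0 ^ k1 ^ 0x1BD11BDA) & MASK32)
    x0 = (counter[0] + ks[0]) & MASK32
    x1 = (counter[1] + ks[1]) & MASK32
    for r in range(rounds):
        x0 = (x0 + x1) & MASK32
        rot = ROTATIONS[r % 8]
        x1 = ((x1 << rot) | (x1 >> (32 - rot))) & MASK32
        x1 ^= x0
        if r % 4 == 3:  # key injection every four rounds
            s = r // 4 + 1
            x0 = (x0 + ks[s % 3]) & MASK32
            x1 = (x1 + ks[(s + 1) % 3] + s) & MASK32
    return x0, x1
```

Determinism is the point: reproducing a random tensor only requires the key and the cell's counter, which is what makes per-device, per-cell generation ("vectorized" in the first pass, "pointwise" in the second) feasible.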
@@ -152,4 +162,4 @@ The dependency on `cudajit` and `metal` is optional, so you have to install them
 
 ## Development
 
-NOTE TO POTENTIAL CONTRIBUTORS: while I am slowly starting to work with PRs in separate branches rather than just a stream of commits on the main branch, design migrations will be broken into small PRs to avoid main (master) branch staleness; and many changes will still be commits on the main branch. We allow for failing tests on the main branch, although going forward this should be happening less for unit tests. Tagged i.e. released versions of the code are guaranteed to work as well as the given stage of the project permitted, the policy is that all tests must pass for releases.
+NOTE TO POTENTIAL CONTRIBUTORS: while I am slowly starting to work with PRs in separate branches rather than just a stream of commits on the main branch, design migrations will be broken into small PRs to avoid main (master) branch staleness, and many changes will still be commits on the main branch. We allow for failing tests on the main branch, although going forward this should hopefully happen less often. Tagged (i.e. released) versions of the code are guaranteed to work as well as the given stage of the project permitted. The policy is that for releases, all tests must pass with the `sync_cc` backend, and all other backends must exhibit the behavior expected of a backend. We try to minimize discrepancies across backends, but we prefer more stringent tests even if some backends only pass them "in spirit" rather than matching the exact expectations of the `sync_cc` backend.
