
Commit 8335a27

Update upcoming milestones / releases

1 parent a46fb8f · commit 8335a27

File tree

1 file changed: +13 −11 lines changed


README.md

Lines changed: 13 additions & 11 deletions

```diff
@@ -68,20 +68,22 @@ This is very tentative.
 * DONE: BF16, FP8.
 * DONE: Extended expressivity of projections and the generalized einsum notation to cover strided iteration and convolution.
 * DONE: Parameter initialization on devices.
+* TODO: New syntax for inline parameter definitions; record-based syntax instead of string-based.
 * TODO: counter-based randomness via threefry.
-* TODO: Add convnet building blocks and corresponding examples starting with MNIST.
-* TODO: Verify or rethink usefulness of dimension labels, and whether to introduce axis labels.
-* 0.7: Replicate the scaffolding from [llm.c](https://github.com/karpathy/llm.c) for training GPT-2.
-  * Useful building blocks for models in [lib/nn_blocks.ml](lib/nn_blocks.ml).
-  * A language model example.
-  * Port (translate or bind) the Python files from [llm.c](https://github.com/karpathy/llm.c) to implement tokenization, data loading and saving etc.
-  * At the end of 0.6.x, we should have an apples-to-apples benchmark comparing OCANNL to [llm.c](https://github.com/karpathy/llm.c) for both CPU and GPU.
-* 0.8: Optimize performance -- low hanging fruit.
+* 0.6.1:
+  * Add convnet building blocks and corresponding examples starting with MNIST.
+  * Add transformer building blocks.
+  * Integrate with huggingface-tokenizers.
+  * Add a GPT-2 style example, ideally benchmarkable against [llm.c](https://github.com/karpathy/llm.c).
+* 0.6.2:
+  * Verify or rethink usefulness of dimension labels, and whether to introduce axis labels.
+  * Add concatenation to the einsum syntax (an axis that is a concatenation of two axes each from another tensor); it's a generalization of stacking tensors.
+* 0.7: Optimize performance -- low hanging fruit.
   * First harvested from [Fast Multidimensional Matrix Multiplication on CPU from Scratch](https://siboehm.com/articles/22/Fast-MMM-on-CPU).
   * Then harvested from [How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog](https://siboehm.com/articles/22/CUDA-MMM).
   * Finally from [llm.c](https://github.com/karpathy/llm.c).
-  * These will require splitting a routine into multiple CUDA kernels.
-* 0.9: Optimize performance: program search.
+  * These will either require splitting a routine into multiple kernels, or implementing the megakernel approach.
+* 0.8: Optimize performance: program search.
   * Instead of dynamic scheduling as in tinygrad, we can schedule statically by program search.
   * We should also reproduce the search that tinygrad is doing.
   * Check which optimizations are missing against the implementation of [llm.c](https://github.com/karpathy/llm.c).
```
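The "strided iteration and convolution" item in the einsum notation can be illustrated outside OCANNL: in NumPy (used here purely for illustration, not OCANNL's notation), a "valid" 1-D convolution-style contraction is exactly an einsum over a strided windowed view of the input.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Illustration (NumPy, not OCANNL syntax): convolution expressed as an
# einsum over strided iteration. The windowed view shares storage with x
# (no copy); the einsum then contracts the window axis against the kernel.
x = np.arange(8.0)              # input signal, length 8
k = np.array([1.0, 2.0, 3.0])   # kernel, length 3

windows = sliding_window_view(x, k.size)   # shape (6, 3), strided view
conv = np.einsum('ow,w->o', windows, k)    # "valid" correlation, length 6

assert np.allclose(conv, np.correlate(x, k, mode='valid'))
```

The same pattern generalizes to strides greater than one by slicing the window axis (e.g. `windows[::2]`) before the contraction.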
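The "counter-based randomness via threefry" item refers to the Threefry generator from the Random123 family (Salmon et al.). A minimal Python sketch in the style of Threefry-2x32 is below; the rotation constants and key schedule are reproduced from memory of the 2x32 variant, so treat this as illustrative rather than a verified reference implementation.

```python
# Sketch of a counter-based RNG in the style of Threefry-2x32.
# ASSUMPTION: rotation constants and key-injection schedule recalled from
# the Random123 paper; illustrative only, not a reference implementation.

MASK32 = 0xFFFFFFFF
ROTATIONS = [13, 15, 26, 6, 17, 29, 16, 24]

def rotl32(x, r):
    """Rotate a 32-bit value left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK32

def threefry2x32(key, counter, rounds=20):
    """Statelessly map a (key, counter) pair of 32-bit pairs to two 32-bit words."""
    ks = [key[0], key[1], key[0] ^ key[1] ^ 0x1BD11BDA]
    x0 = (counter[0] + ks[0]) & MASK32
    x1 = (counter[1] + ks[1]) & MASK32
    for r in range(rounds):
        x0 = (x0 + x1) & MASK32
        x1 = rotl32(x1, ROTATIONS[r % 8]) ^ x0
        if r % 4 == 3:  # inject the key every four rounds
            j = r // 4 + 1
            x0 = (x0 + ks[j % 3]) & MASK32
            x1 = (x1 + ks[(j + 1) % 3] + j) & MASK32
    return x0, x1

# The point of counter-based RNG: no mutable generator state. Each
# (key, counter) pair independently determines its value, so parallel
# devices can draw reproducible, non-overlapping streams.
a = threefry2x32((0, 0), (0, 0))
assert a == threefry2x32((0, 0), (0, 0))   # reproducible
assert a != threefry2x32((0, 0), (1, 0))   # distinct counters, distinct draws
```

Because every round is invertible, the map is a bijection of the counter for a fixed key, which is what guarantees distinct counters produce distinct outputs.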
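The claim that an einsum concatenation axis generalizes stacking can also be made concrete in NumPy (again for illustration only): stacking is the special case of concatenation in which each input contributes a fresh axis of size 1.

```python
import numpy as np

# Illustration (NumPy, not OCANNL syntax) that stacking is a special
# case of a concatenation axis.
a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)

# A concatenation axis of size 2 + 2, built from one axis of each input:
concat = np.concatenate([a, b], axis=0)    # shape (4, 3)

# Stacking = concatenating along fresh size-1 axes of the inputs:
stacked = np.stack([a, b], axis=0)         # shape (2, 2, 3)
same = np.concatenate([a[None], b[None]], axis=0)
assert np.array_equal(stacked, same)
```

The proposed syntax would presumably let such an axis appear directly in an einsum specification, so the sum ranges over the concatenated extent without materializing the concatenation first.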
```diff
@@ -150,4 +152,4 @@ The dependency on `cudajit` and `metal` is optional, so you have to install them
 
 ## Development
 
-NOTE TO POTENTIAL CONTRIBUTORS: while I am starting to work with PRs in separate branches rather than just a stream of commits on the main branch, design migrations will be broken into small PRs to avoid main (master) branch staleness. We allow for failing tests on the main branch, although going forward this should be happening less for unit tests. Tagged i.e. released versions of the code are guaranteed to work as well as the given stage of the project permitted, the policy is that all tests must pass for releases.
+NOTE TO POTENTIAL CONTRIBUTORS: while I am slowly starting to work with PRs in separate branches rather than just a stream of commits on the main branch, design migrations will be broken into small PRs to avoid main (master) branch staleness, and many changes will still be commits on the main branch. We allow for failing tests on the main branch, although going forward this should be happening less for unit tests. Tagged, i.e. released, versions of the code are guaranteed to work as well as the given stage of the project permitted; the policy is that all tests must pass for releases.
```
