# 👨‍💻 Changelog

All notable changes to this project will be documented in this file.

## [unreleased]

* 🪜 layer_seq: RoPESeq (#124)
* 🪜 layer_seq: RMSNormSeq (#123)
* 🪜 layer_seq: EmbeddingSeq (#122)
* 🚀 perf: use half in Metal kernels (#121)
* 🔨 refactor: handle float16 along float on GPU (#120)
* 🚀 perf: copy & generate weights faster (#119)
* 🚀 perf: Convolution2D (#118)
* 🪜 feat: LayerCAM2D -> VQGrad2D, LayerCAMSeq -> VQGradSeq (#117)
* ⚙️ core: GELU vs GELUApprox (#113)
* 🚀 perf: QuerySelf & ValueSelf (#112)
* 🚀 perf: benchmark ViT base model (#111)
* 🐛 fix: run on Apple Silicon (#110)
* ⚙️ core: initForward,Backward model API (#109)
* 🪜 layer_1d: Dropout1D (#108)
* 🪜 feat: VQGrad, VQGradSeq (#107)
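The `RoPESeq` entry (#124) refers to rotary position embeddings (RoPE). As a framework-independent illustration of the technique, not GrAIdient's actual API, a minimal pure-Python sketch of the rotation applied to each feature pair of a token embedding:

```python
import math

def rope(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of one token embedding
    by position-dependent angles (rotary position embedding)."""
    d = len(x)
    out = [0.0] * d
    for i in range(d // 2):
        # Angle shrinks geometrically with the pair index, as in the RoPE paper.
        theta = pos * base ** (-2 * i / d)
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = x[2 * i], x[2 * i + 1]
        out[2 * i] = x1 * c - x2 * s
        out[2 * i + 1] = x1 * s + x2 * c
    return out
```

Because each pair undergoes a pure rotation, the embedding's norm is preserved, and attention dot products between rotated queries and keys depend only on relative positions.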

## 0.3.1 (2023-08-09)

### Bug Fixes

* 🐛 fix: input GPU check (#106)

## 0.3.0 (2023-08-04)

### Features

* 🪜 feat: BCE1D, BCE2D, VQ2D & VQSeq as losses (#101)
* 🪜 layer_seq: VQSeq (#100)
* 🪜 layer_2d: loosen range constraint in ColorJitterHSV (#98)
* 🪜 layer_2d: SimilarityError2D & dirty losses (#97)
* 🪜 layer_2d: ColorJitterHSV, Image & ImageTests (#93)
* 🪜 layer_2d: Flip2D & config_kernels (#92)
* 🪜 layer_2d: SimilarityBatchError2D (#88)
* 🪜 layer_2d: Normalize2D (#87)
* 🪜 layer_2d: SelfCorrelate2D (#86)
* 🪜 layer_2d: VQ2D (#81)
* 🪜 layer_seq: Adding new layer SelectNeuronsSeq (#77)
* ⚙️ core: GELU activation function (#73)
* 🪜 layer_seq: ValueSeq (#69)
* 🪜 layer_seq: SoftmaxSeq (#68)
* 🪜 layer_seq: QuerySeq (#67)
* 🪜 layer_seq: LayerNormSeq & LayerNormalization (#66)
* 🪜 layer_seq: FullyConnectedSeq (#65)
* 🪜 layer_seq: Constant12Seq & Constant2Seq (#64)
* 🪜 layer_seq: Concat1Seq & Concat2Seq (#63)
* 🪜 layer_seq: SumSeq (#62)
* 🪜 layer_2d: MSE2D & LayerOutput2D (#61)
* 🪜 layer_seq: FullyConnectedPatch & base classes (#60)
* 🪜 layer_2d: Constant2D (#56)
* 🪜 layer_2d: AdaIN (#55)
* 🪜 layer_2d: InstanceNorm2D & InstanceNormalization (#54)
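The GELU activation (#73) is also the subject of the `GELU vs GELUApprox` distinction in the unreleased section above. A sketch of the two standard formulations, the exact one via the Gaussian CDF and the common tanh approximation, shown here in plain Python rather than the framework's own API:

```python
import math

def gelu(x):
    # Exact GELU: x * Phi(x), with the Gaussian CDF expressed via erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_approx(x):
    # Tanh approximation from Hendrycks & Gimpel, cheaper on some hardware.
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))
```

The two agree to within about 1e-3 over typical activation ranges, which is why frameworks often expose both and let the user trade accuracy for speed.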

### Bug Fixes

* 🐛 layer_2d: align Convolution & Deconvolution on PyTorch (#84)
* 🐛 fix: numerical stability of tanh for GELU (#83)
* 🐛 fix: numerical instability of Softmax (#76)
* 🐛 fix: update ValueSeq operation (#72)
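Two of the fixes above concern the numerical stability of Softmax (#76, and its validation in #70 below). The standard remedy, illustrated here in plain Python rather than the framework's Metal kernels, is to subtract the maximum logit before exponentiating so that `exp()` never overflows:

```python
import math

def softmax(logits):
    # Shifting by the max leaves the result unchanged mathematically
    # (the factor exp(-m) cancels in the ratio) but keeps exp() bounded.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Without the shift, `math.exp(1000.0)` overflows; with it, even extreme logits produce well-defined probabilities.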

### Miscellaneous Tasks

* 🔨 refactor: throwable init (#103)
* 🔨 refactor: dims checks for inputs and outputs (#102)
* 🔨 layer_2d: expose indices in VQ2D (#99)
* 🔨 core: LayerWeightInit (#96)
* 🚨 test: FlowAccumulateTrainer (#95)
* 🚨 examples: compare training with PyTorch (#94)
* 🔨 layer_2d: remove computeVQ (#91)
* 🔨 layer_2d: API for random transforms (#90)
* 🚀 perf: enhance Normalize122D with reduce (#89)
* 🚨 integration: resize alignment with PyTorch (#85)
* 🔨 layer_seq: SelectSeq (#82)
* 🚀 examples: AutoEncoder models (#79)
* 🚀 layer_seq: factorize by nbHeads (#78)
* 🚀 examples: make Transformer example very simple (#75)
* 🚀 examples: adding Transformer training example (#74)
* 🚨 integration: update & validate LayerNormSeq (#71)
* 🚨 integration: validate MultiHeadAttention & fix Softmax stability (#70)

## 0.2.0 (2023-02-27)

### Features

* 🪜 layer_1d: Softmax1D, DotProduct1D & Constant1D (#49)
* 🪜 feat: remove activation from layer (#47)
* 🪜 feat: LayerMerge1D, Sum1D, Concat1D, Concat2D (#43)
* 🪜 layer_2d: Deconvolution2D (#42)
* 🪜 feat: getDeltaWeightsGPU per sample API (#41)
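`LayerMerge1D`, `Sum1D`, and `Concat1D` (#43) introduce layers that merge several branches of a model back into one. A minimal sketch of the two merge semantics; the function names here are illustrative, not the framework's API:

```python
def sum_merge(branches):
    # Elementwise sum of equally sized 1D branches (Sum1D-style merge).
    return [sum(vals) for vals in zip(*branches)]

def concat_merge(branches):
    # Concatenation of 1D branches along the feature axis (Concat1D-style merge).
    return [v for branch in branches for v in branch]
```

Sum merging requires all branches to share the same width and preserves it; concatenation accepts any widths and outputs their total.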

### Bug Fixes

* 🐛 fix: use buffers for neuron selection in SelectNeurons1D (#50)
* 🐛 fix: model context max id (#45)
* 🐛 fix: remove error when data input may indicate lower batch size (#44)

### Miscellaneous Tasks

* 📚 docs: change project description and add links (#57)
* 📚 docs: PropertyListEncoder by default (#51)
* 🎉 refactor: logo (#46)
* 🎉 refactor!: rebrand the framework (#40)

## 0.1.1 (2022-12-16)

### Features

* 🪜 layer_2d: ResizeBilinearCrop (#36)
* 🚀 perf: enhance backwardGPU for ResizeBilinear (#35)
* 🪜 layer_2d: Rotate2D (#34)
* 🪜 layer_2d: ResizeBilinear (#32)
* 🪜 layer_2d: Pad2D & Jitter2D (#30)
* 🪜 layer_2d: add tests for non dirty status (#27)
* 🪜 layer_2d: FTFrequences2D & Multiply2D (#25)
* 🪜 layer_2d: LinearScale2D (#24)
* 🪜 layer_2d: DecorelateRGB (#23)
* 🪜 layer_2d: RDFT2Image (#22)
* 🪜 core: Sigmoid activation (#21)
* 🚀 metal: systematic dispatchThreads API (#19)
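`ResizeBilinear` (#32) and its variants interpolate each output pixel from the four nearest source samples. A pure-Python sketch of bilinear resizing on a 2D grid, shown as an illustration of the technique (with align-corners-style sampling assumed) rather than the framework's implementation:

```python
def resize_bilinear(src, out_h, out_w):
    """Resize a 2D grid of floats with bilinear interpolation."""
    in_h, in_w = len(src), len(src[0])
    # Map output corners onto input corners (align-corners convention).
    sy = (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
    sx = (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
    out = []
    for i in range(out_h):
        y = i * sy
        y0 = int(y)
        y1 = min(y0 + 1, in_h - 1)
        fy = y - y0
        row = []
        for j in range(out_w):
            x = j * sx
            x0 = int(x)
            x1 = min(x0 + 1, in_w - 1)
            fx = x - x0
            # Blend the four neighbours: horizontally, then vertically.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Clamping `y1` and `x1` to the last index keeps the sampling in bounds, which also makes the output dimensions fully deterministic, the property restored by fix #33 below.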

### Bug Fixes

* 🐛 fix: update correlation matrix coeffs (#37)
* 🐛 fix: ResizeBilinear to output deterministic dimensions (#33)

### Miscellaneous Tasks

* 🔨 refactor: remove transaction (#31)
* 🚨 integration: activate DecorrelateRGB in test (#29)
* 🚨 integration: test IDFT and complex numbers (#28)
* 🔨 test: factorize transform tests (#26)
* 👷 ci: remove swift action (#20)
* 👷 ci: remove LFS (#17)

## 0.1.0 (2022-10-28)

### Features

* ⚙️ core: remove incEpoch & applyGradient rename (#11)
* 🚀 examples: simple VGG trained on CIFAR (#9)
* 🪜 layer_2d: convolution, bn and other 2D layers (#7)
* 🪜 layer_1d: activation, fl, linear error, mse, select channels (#5)
* ⚙️ core: Layer architecture (#4)
* ⚙️ core: Optimizer architecture (#3)
* ⚙️ core: Model architecture (#2)
* ⚡️ metal: Metal architecture (#1)

### Documentation

* 📚 update the readme and add documentation (#12)

### Miscellaneous Tasks

* 🔧 chore: release 0.1.0 (#13)
* 🚀 test: reproducibility with PyTorch (#10)
* 🪜 test: layer2d (#8)
* ⚙️ test: optimizer, layer1d, clipping (#6)