# Activity · robertknight/rten

Recent activity in the [robertknight/rten](https://github.com/robertknight/rten) repository (public, default branch `main`, created 2022-08-31), most recent first. Repeated force-pushes of the same commit message are listed once.

## Branch `transpose-matmul-fusion` — created 2024-06-19, updated 2024-06-20

**Add Transpose + MatMul fusion**

This fuses subgraphs of the form:

```
(Transpose(X), Y) -> MatMul
```

The fused operation transposes the `X` view before invoking the `MatMul` op with the transposed view. This avoids materializing the transposed matrix.

This fusion works because the `MatMul` operation can efficiently handle transposed input views. Other operations could also support an extended version of this fusion in the future.

## PR #241: Perform constant propagation when loading models — merged 2024-06-18

Merged from the `const-prop-optimize` branch (created 2024-06-17, deleted after merge). Commits included:

- **Add a note on graph optimizations to the `Model` docs**
- **Skip constant propagation in rten-generate if not required**

  Since constant propagation is now performed as a graph optimization when the model is loaded, it only needs to be re-done if additional constants are added.

  This change assumes that graph optimizations were enabled when the model was loaded. If they were disabled, and no other constants were added, this could lead to expensive re-evaluation of unchanging parts of the graph on each run.

## PR #240: Prevent `Model::partial_run` from propagating values through random ops — merged 2024-06-16

Merged from the `partial-run-non-deterministic-ops` branch (deleted after merge).

Allow operators to declare whether they are deterministic or not, make the `Random*` ops return `false`, and prevent `Graph::partial_run` from propagating values through non-deterministic operators.

With this change it should become safe to apply constant propagation as an optimization when models are loaded.

Fixes https://github.com/robertknight/rten/issues/90.

## PR #239: Reduce KV-cache growth cost from `O(sequence_len)` to `O(1)` — merged 2024-06-15

Merged from the `kv-cache-in-place-concat` branch (created 2024-06-15, deleted after merge).

**Pre-allocate space for KV cache in generator**

Make growth of the KV cache more efficient by pre-allocating space for the maximum expected sequence length and passing the KV-cache as an owned tensor into the model. When the `Concat` operator is used to update the KV-cache, it will run in-place since the KV-cache input tensor already has enough capacity. This reduces the cost of updating the KV cache from `O(seq_len)` to `O(1)` at each step.

For this optimization to work, the model has to satisfy some assumptions:

1. The updated KV cache is produced using `Concat`, not via some other sequence of operations.
2. Each KV cache input is passed into `Concat` as the first operand, without any earlier operations that would cause it to be copied.
3. The output of each KV cache concatenation is returned from the model as an updated KV-cache output, without any operations after the `Concat` that would cause it to be copied.

On the GPT-2 demo this reduces generation time significantly for long outputs, e.g. for 300 tokens it reduced generation time from ~28s to ~21s on my laptop.

## PR #238: Explicitly specify generic args for `transmute` calls — merged 2024-06-15

Merged from the `explicit-transmute` branch (deleted after merge).

This addresses warnings from clippy added in Rust v1.79. See https://rust-lang.github.io/rust-clippy/master/index.html#/missing_transmute_annotations.

## PR #236: Improve handling of decoding errors and special tokens in `TextGenerator::next` — merged 2024-06-12

Merged from the `generator-bpe-decode` branch (deleted after merge). Commits included **Apply iterator simplification suggested by clippy**.

## PR #235: Support top-K sampling in rten-generate — merged 2024-06-11

Merged from the `generator-topk` branch (deleted after merge).
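The Transpose + MatMul fusion relies on a matmul kernel being able to read its left operand through a transposed view. A minimal sketch of that idea (the function name and layout here are illustrative, not rten's kernels): the inner loop indexes `A` with swapped coordinates, so `A^T` is never materialized as a separate buffer.

```rust
/// C = A^T * B, where `a` is stored row-major with shape (k, m).
/// Instead of copying A into a transposed buffer, we read
/// A^T[i][p] as a[p * m + i] — i.e. the matmul consumes a
/// transposed *view* of the original storage.
fn matmul_transposed_a(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    assert_eq!(a.len(), k * m);
    assert_eq!(b.len(), k * n);
    let mut c = vec![0.0; m * n];
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0;
            for p in 0..k {
                // A^T[i][p] == A[p][i]
                acc += a[p * m + i] * b[p * n + j];
            }
            c[i * n + j] = acc;
        }
    }
    c
}

fn main() {
    // A is 2x3 (k = 2, m = 3); B is the 2x2 identity (k = 2, n = 2).
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // rows: [1 2 3], [4 5 6]
    let b = [1.0, 0.0, 0.0, 1.0];
    let c = matmul_transposed_a(&a, &b, 3, 2, 2);
    // A^T * I == A^T, whose rows are the columns of A: [1 4], [2 5], [3 6].
    assert_eq!(c, vec![1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
    println!("ok");
}
```

This is why the fusion is profitable: the transposed "operand" costs only an index swap, while a standalone `Transpose` would allocate and fill a whole matrix before the `MatMul` runs.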
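The constant-propagation work in PR #241 and PR #240 combines two rules: fold any op whose inputs are all constants, but never fold a non-deterministic op (such as the `Random*` operators), since that would freeze a value that should differ on every run. A rough sketch under invented types — rten's real `Graph` is nothing like this simplified scalar graph:

```rust
#[derive(Clone)]
enum Node {
    Constant(f32),
    /// An operator node: input node indices, a compute function, and a
    /// flag saying whether it is safe to evaluate ahead of time.
    Op {
        inputs: Vec<usize>,
        apply: fn(&[f32]) -> f32,
        deterministic: bool,
    },
}

/// Repeatedly replace ops whose inputs are all constants with their
/// computed value. Non-deterministic ops are skipped, so folding at
/// load time cannot change a model's observable behavior.
fn propagate_constants(nodes: &mut Vec<Node>) {
    loop {
        let mut changed = false;
        for i in 0..nodes.len() {
            let folded = match &nodes[i] {
                Node::Op { inputs, apply, deterministic: true } => {
                    // Collect input values; None if any input is not constant.
                    let vals: Option<Vec<f32>> = inputs
                        .iter()
                        .map(|&j| match &nodes[j] {
                            Node::Constant(v) => Some(*v),
                            _ => None,
                        })
                        .collect();
                    vals.map(|v| apply(&v))
                }
                _ => None,
            };
            if let Some(v) = folded {
                nodes[i] = Node::Constant(v);
                changed = true;
            }
        }
        if !changed {
            break;
        }
    }
}

fn main() {
    fn add(xs: &[f32]) -> f32 { xs[0] + xs[1] }
    let mut nodes = vec![
        Node::Constant(2.0),
        Node::Constant(3.0),
        Node::Op { inputs: vec![0, 1], apply: add, deterministic: true },
        // A "random" op: must survive constant propagation unfolded.
        Node::Op { inputs: vec![], apply: |_| 0.5, deterministic: false },
    ];
    propagate_constants(&mut nodes);
    assert!(matches!(nodes[2], Node::Constant(v) if v == 5.0));
    assert!(matches!(nodes[3], Node::Op { .. }));
    println!("ok");
}
```

The loop-until-fixpoint structure is what makes chains fold: once an op becomes a `Constant`, ops downstream of it may become foldable on the next pass.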
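The KV-cache change in PR #239 hinges on one property of pre-reserved buffers: appends within capacity never reallocate, which is what lets the in-place `Concat` keep writing into the same allocation instead of copying the whole cache each step. A rough sketch with a plain `Vec<f32>` — the `KvCache` type here is hypothetical, not rten-generate's API, which passes an owned tensor into the model:

```rust
/// Hypothetical per-layer KV cache holding `seq_len * head_dim` values.
struct KvCache {
    buf: Vec<f32>,
    head_dim: usize,
}

impl KvCache {
    /// Reserve space for the maximum expected sequence length up front,
    /// so later appends never need to reallocate and copy the cache.
    fn with_max_seq_len(max_seq_len: usize, head_dim: usize) -> Self {
        KvCache {
            buf: Vec::with_capacity(max_seq_len * head_dim),
            head_dim,
        }
    }

    /// Append the keys/values for one generation step. Because capacity
    /// was pre-reserved, this costs O(head_dim) per step rather than the
    /// O(seq_len * head_dim) of reallocating and copying the whole cache.
    fn append_step(&mut self, step_kv: &[f32]) {
        assert_eq!(step_kv.len(), self.head_dim);
        let ptr_before = self.buf.as_ptr();
        self.buf.extend_from_slice(step_kv);
        // Within capacity, Vec guarantees no reallocation — the analogue
        // of `Concat` running in place on the cache tensor.
        debug_assert_eq!(ptr_before, self.buf.as_ptr());
    }

    fn seq_len(&self) -> usize {
        self.buf.len() / self.head_dim
    }
}

fn main() {
    let mut cache = KvCache::with_max_seq_len(300, 4);
    for step in 0..10 {
        cache.append_step(&[step as f32; 4]);
    }
    assert_eq!(cache.seq_len(), 10);
    println!("seq_len = {}", cache.seq_len());
}
```

The three model-side assumptions in the commit message are about preserving this property end to end: any operation that copies the cache before or after the `Concat` forfeits the in-place update and brings back the O(seq_len) cost.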
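For the top-K sampling added in PR #235, the usual recipe is: keep only the K largest logits, softmax over that subset, and sample from the renormalized distribution. A hedged sketch — the function and its signature are illustrative, not rten-generate's API; randomness is a caller-supplied uniform value in `[0, 1)` to keep it deterministic:

```rust
/// Sample a token index from `logits` restricted to the top `k` entries,
/// using `u` in [0, 1) as the uniform random draw.
fn sample_top_k(logits: &[f32], k: usize, u: f32) -> usize {
    // Indices sorted by logit, descending; keep the top K.
    let mut idx: Vec<usize> = (0..logits.len()).collect();
    idx.sort_by(|&a, &b| logits[b].partial_cmp(&logits[a]).unwrap());
    idx.truncate(k);

    // Softmax over the retained logits (subtract the max for stability).
    let max = logits[idx[0]];
    let weights: Vec<f32> = idx.iter().map(|&i| (logits[i] - max).exp()).collect();
    let total: f32 = weights.iter().sum();

    // Invert the CDF at `u` to pick a token.
    let mut acc = 0.0;
    for (w, &i) in weights.iter().zip(&idx) {
        acc += w / total;
        if u < acc {
            return i;
        }
    }
    *idx.last().unwrap()
}

fn main() {
    let logits = [0.1, 3.0, 0.2, 2.0];
    // With k = 1, sampling always returns the argmax token (index 1).
    assert_eq!(sample_top_k(&logits, 1, 0.99), 1);
    // With k = 2, only indices 1 and 3 can ever be sampled.
    let tok = sample_top_k(&logits, 2, 0.5);
    assert!(tok == 1 || tok == 3);
    println!("ok");
}
```

Taking `u` as a parameter rather than calling a RNG internally also mirrors the determinism concern from PR #240: the sampling step is the one place randomness enters generation, and keeping it explicit makes the rest of the pipeline reproducible.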