{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":742114490,"defaultBranch":"main","name":"executorch-1","ownerLogin":"SS-JIA","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2024-01-11T19:42:29.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/7695547?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1717181827.0","currentOid":""},"activityList":{"items":[{"before":"3f103654a96bd0702aaa96a7ed89d558acba20c3","after":"6be65169f5f6a0f89887cc7f7a6f107bb24629e1","ref":"refs/heads/export-D58016869","pushedAt":"2024-06-03T22:18:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Clean up Shader Profiling Capability (#3792)\n\nSummary:\n\n## Context\n\n1. Clean up `QueryPool` class; remove a bunch of unnecessary functionality, rename member functions to be more clear, etc.\n2. Rename `USE_VULKAN_GPU_DIAGNOSTICS` to `SHADER_PROFILING_ENABLED`\n3. Make it so that `QueryPool` is always included in `Context`; this is to reduce the amount of `#ifdef SHADER_PROFILING_ENABLED` blocks in order to improve readability and developer experience\n4. Enable shader profiling in `ComputeGraph`.\n\nDifferential Revision: D58016869","shortMessageHtmlLink":"Clean up Shader Profiling Capability (pytorch#3792)"}},{"before":"5e072c5c1085715e0ca73880a7290d635ba7fab7","after":"3f103654a96bd0702aaa96a7ed89d558acba20c3","ref":"refs/heads/export-D58016869","pushedAt":"2024-06-03T22:17:17.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Clean up Shader Profiling Capability\n\nSummary:\n## Context\n\n1. Clean up `QueryPool` class; remove a bunch of unnecessary functionality, rename member functions to be more clear, etc.\n2. Rename `USE_VULKAN_GPU_DIAGNOSTICS` to `SHADER_PROFILING_ENABLED`\n3. Make it so that `QueryPool` is always included in `Context`; this is to reduce the amount of `#ifdef SHADER_PROFILING_ENABLED` blocks in order to improve readability and developer experience\n4. Enable shader profiling in `ComputeGraph`.\n\nDifferential Revision: D58016869","shortMessageHtmlLink":"Clean up Shader Profiling Capability"}},{"before":null,"after":"5e072c5c1085715e0ca73880a7290d635ba7fab7","ref":"refs/heads/export-D58016869","pushedAt":"2024-05-31T18:57:07.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Clean up Shader Profiling Capability\n\nSummary:\n## Context\n\n1. Clean up `QueryPool` class; remove a bunch of unnecessary functionality, rename member functions to be more clear, etc.\n2. Rename `USE_VULKAN_GPU_DIAGNOSTICS` to `VULKAN_SHADER_PROFILING_ENABLED`\n3. Make it so that `QueryPool` is always included in `Context`; this is to reduce the amount of `#ifdef VULKAN_SHADER_PROFILING_ENABLED` blocks in order to improve readability and developer experience\n4. 
## export-D57577019: Add support for buffer storage tensors (pytorch#3684)

Branch created 2024-05-20 as "Add transfer shaders for buffer storage tensors" and force-pushed repeatedly through 2024-05-22 under the final title above. Pull request: https://github.com/pytorch/executorch/pull/3684. Reviewed by yipjustin. Differential Revision: D57577019.

Add support for tensors that use buffer storage, in preparation for quantization support. The initial versions of the quantized operators will target buffer-based tensors, because the primary use case is LLMs, which may contain tensors that exceed the texture limits.
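As a rough illustration of why a buffer fallback is needed for very large tensors, the sketch below chooses a storage type by comparing a tensor's would-be image extents against the device's 3D image limit. The helper and the heuristic are assumptions made for illustration, not the selection logic ExecuTorch uses.

```cpp
#include <array>
#include <cstdint>

enum class StorageType { Texture3D, Buffer };

struct DeviceLimits {
  // e.g. VkPhysicalDeviceLimits::maxImageDimension3D on the target GPU
  uint32_t max_image_dimension_3d;
};

// Fall back to buffer storage when any extent of the packed 3D image that
// would back this tensor exceeds the device's texture limit.
StorageType choose_storage(const std::array<uint32_t, 3>& image_extents,
                           const DeviceLimits& limits) {
  for (uint32_t extent : image_extents) {
    if (extent > limits.max_image_dimension_3d) {
      return StorageType::Buffer;
    }
  }
  return StorageType::Texture3D;
}
```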
## export-D57655257: Fix zero size tensors (pytorch#3702)

Branch created 2024-05-22 and force-pushed twice the same day. Reviewed by yipjustin. Differential Revision: D57655257.

Dispatching a command buffer with a work group size that contains 0 is undefined behaviour; on some devices it can cause the device to be lost. Fix this by setting the work group size to `{1, 1, 1}` right before dispatching a command buffer whenever the work group size contains a 0.
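The fix amounts to a small clamp applied just before the dispatch is recorded. A minimal sketch, assuming a plain `uvec3` group-count struct rather than the backend's own utility types:

```cpp
#include <cstdint>

struct uvec3 {
  uint32_t x, y, z;
};

// Per the commit message, a dispatch whose group count contains a 0 is
// undefined behaviour on some devices, so clamp to {1, 1, 1} instead.
inline uvec3 sanitize_group_count(uvec3 count) {
  if (count.x == 0 || count.y == 0 || count.z == 0) {
    return {1u, 1u, 1u};
  }
  return count;
}

// Usage, assuming a valid VkCommandBuffer `cmd`:
//   uvec3 g = sanitize_group_count(global_group_count);
//   vkCmdDispatch(cmd, g.x, g.y, g.z);
```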
Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Add transfer shaders for buffer storage tensors (#3684)\n\nSummary:\n\n## Context\n\nAdd transfer shaders for tensors that use buffer storage, in preparation for quantization support.\n\nDifferential Revision: D57577019","shortMessageHtmlLink":"Add transfer shaders for buffer storage tensors (pytorch#3684)"}},{"before":null,"after":"c951e5d26b37368942872bd06086b731ae678b6c","ref":"refs/heads/export-D57577019","pushedAt":"2024-05-20T18:40:24.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Add transfer shaders for buffer storage tensors\n\nSummary:\n## Context\n\nAdd transfer shaders for tensors that use buffer storage, in preparation for quantization support.\n\nDifferential Revision: D57577019","shortMessageHtmlLink":"Add transfer shaders for buffer storage tensors"}},{"before":null,"after":"e6648e92db617b0a53e62da6e3c4189bf10675da","ref":"refs/heads/export-D57463151","pushedAt":"2024-05-16T21:50:25.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Add tests for zero-dim tensors\n\nSummary: Turns out zero dim tensors don't need anything special to be enabled. Therefore just add test cases for them.\n\nDifferential Revision: D57463151","shortMessageHtmlLink":"Add tests for zero-dim tensors"}},{"before":"c25d11dac9303360608e0d17c357f808a6901392","after":"b49c28d3b4e123f629f979c1dcba53a38941ea4d","ref":"refs/heads/export-D57450473","pushedAt":"2024-05-16T19:47:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Enable zero-size tensors (#3640)\n\nSummary:\n\nAs title.\n\nThe approach is slightly different than in PyTorch Vulkan. Instead of binding no memory, we make a small allocation. The reason for this change is to account for the possibility that some zero size tensors are used as input but the output is not zero size. In that case we still need to be able to bind the zero size tensor to a shader.\n\nReviewed By: yipjustin\n\nDifferential Revision: D57450473","shortMessageHtmlLink":"Enable zero-size tensors (pytorch#3640)"}},{"before":"2abf37b6c525543c934307c2072a21d877d96964","after":"c25d11dac9303360608e0d17c357f808a6901392","ref":"refs/heads/export-D57450473","pushedAt":"2024-05-16T19:46:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Enable zero-size tensors (#3640)\n\nSummary:\n\nAs title.\n\nThe approach is slightly different than in PyTorch Vulkan. Instead of binding no memory, we make a small allocation. The reason for this change is to account for the possibility that some zero size tensors are used as input but the output is not zero size. 
## export-D57203869: Implement `aten.linear.default` (pytorch#3594)

Force-pushed repeatedly on 2024-05-14. Reviewed by yipjustin. Differential Revision: D57203869.

The implementation is fairly simple because the shaders only have to accumulate along `mat2`'s width dim rather than its height dim.
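The width-versus-height distinction follows from `aten.linear` computing `mat1 @ mat2.T` with the weight `mat2` stored as `[N, K]`, whereas `aten.mm` computes `mat1 @ mat2` with `mat2` stored as `[K, N]`. A CPU reference sketch of the two accumulation patterns (plain row-major buffers, bias omitted, not the shader code itself):

```cpp
#include <vector>

// mm: out[M x N] = mat1[M x K] * mat2[K x N]; walk down mat2's height (rows).
void mm_reference(const std::vector<float>& mat1, const std::vector<float>& mat2,
                  std::vector<float>& out, int M, int K, int N) {
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (int k = 0; k < K; ++k) {
        acc += mat1[i * K + k] * mat2[k * N + j];
      }
      out[i * N + j] = acc;
    }
  }
}

// linear: out[M x N] = mat1[M x K] * mat2[N x K]^T; walk across mat2's width.
void linear_reference(const std::vector<float>& mat1, const std::vector<float>& mat2,
                      std::vector<float>& out, int M, int K, int N) {
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (int k = 0; k < K; ++k) {
        acc += mat1[i * K + k] * mat2[j * K + k];
      }
      out[i * N + j] = acc;
    }
  }
}
```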
(pytorch#3594)"}},{"before":"3df8490c4d271c7ebe7ca0eaba109970335e6d92","after":"82bdb1da17c2b65e8b9e429657f2db8a0e3cf949","ref":"refs/heads/export-D57203869","pushedAt":"2024-05-14T19:06:25.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Implement `aten.linear.default` (#3594)\n\nSummary:\n\nAs title.\n\nImplementation is rather simple because the shaders just have to accumulate the `mat2` shader across the width dim rather than the height dim.\n\nReviewed By: yipjustin\n\nDifferential Revision: D57203869","shortMessageHtmlLink":"Implement aten.linear.default (pytorch#3594)"}},{"before":"6c2e9024828f11f38448a6e0d70581154f3aa3ab","after":"3df8490c4d271c7ebe7ca0eaba109970335e6d92","ref":"refs/heads/export-D57203869","pushedAt":"2024-05-14T19:05:05.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Implement `aten.linear.default` (#3594)\n\nSummary:\n\nAs title.\n\nImplementation is rather simple because the shaders just have to accumulate the `mat2` shader across the width dim rather than the height dim.\n\nReviewed By: yipjustin\n\nDifferential Revision: D57203869","shortMessageHtmlLink":"Implement aten.linear.default (pytorch#3594)"}},{"before":"20c6e3e1b3f4eaebddb6aed3d0a403a108c94b85","after":"6c2e9024828f11f38448a6e0d70581154f3aa3ab","ref":"refs/heads/export-D57203869","pushedAt":"2024-05-14T16:16:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Implement `aten.linear.default` (#3594)\n\nSummary:\n\nAs title.\n\nImplementation is rather simple because the shaders just have to accumulate the `mat2` shader across the width dim rather than the height dim.\n\nReviewed By: yipjustin\n\nDifferential Revision: D57203869","shortMessageHtmlLink":"Implement aten.linear.default (pytorch#3594)"}},{"before":"635684f9cb49b7b298b88f8946b8485d613ee99e","after":"20c6e3e1b3f4eaebddb6aed3d0a403a108c94b85","ref":"refs/heads/export-D57203869","pushedAt":"2024-05-14T16:15:51.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Implement `aten.linear.default` (#3594)\n\nSummary:\n\nAs title.\n\nImplementation is rather simple because the shaders just have to accumulate the `mat2` shader across the width dim rather than the height dim.\n\nReviewed By: yipjustin\n\nDifferential Revision: D57203869","shortMessageHtmlLink":"Implement aten.linear.default (pytorch#3594)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEW2uBXQA","startCursor":null,"endCursor":null}},"title":"Activity ยท SS-JIA/executorch-1"}