[wip] Merge ndm and pte_data_map #11579
Conversation
Merge the ndm (named data map) and the pte data map to enable cases like LoRA, where we introduce:

- shared weights in the .pte file (this is true of any XNNPACK .pte file, as weights are placed in shared space by default now)
- shared weights in an external file

Pass the merged data map into Method so it can access both.

Note: the const qualifiers have to be removed, as the flat tensor data map is modified.

Differential Revision: [D76447708](https://our.internmc.facebook.com/intern/diff/D76447708/)
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11579

Note: links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures as of commit bb1695c with merge base c4c4763.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 289741480
Pull Request resolved: #11579

This pull request was exported from Phabricator. Differential Revision: D76447708
Stack from ghstack (oldest at bottom):
Merge the ndm (named data map) and the pte data map to enable cases like LoRA, where we introduce:

- shared weights in the .pte file (this is true of any XNNPACK .pte file, as weights are placed in shared space by default now)
- shared weights in an external file

Pass the merged data map into Method so it can access both.

Note: the const qualifiers have to be removed, as the flat tensor data map is modified.
Differential Revision: D76447708
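To illustrate the idea behind the change, here is a minimal C++ sketch of a merged data map. The types and names below (`DataMap`, `MergedDataMap`, `get`) are hypothetical simplifications, not the actual ExecuTorch `NamedDataMap` API, and the lookup priority (external file first, then the .pte-embedded map) is an assumption; the sketch only shows the fallback behavior that lets Method resolve a key from either source through one map.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Hypothetical stand-in for a named data map: keys resolve to some
// payload handle (a plain integer here for simplicity).
struct DataMap {
  std::map<std::string, std::uint64_t> entries;

  std::optional<std::uint64_t> get(const std::string& key) const {
    auto it = entries.find(key);
    if (it == entries.end()) {
      return std::nullopt;
    }
    return it->second;
  }
};

// Merged view over the external-file map and the .pte-embedded map.
// Pointers are non-const to mirror the PR's note that the flat tensor
// data map may be modified.
struct MergedDataMap {
  DataMap* external;  // weights delivered in a separate file
  DataMap* pte;       // weights embedded in the .pte file

  std::optional<std::uint64_t> get(const std::string& key) const {
    // Assumed priority: consult the external map first, then fall
    // back to the .pte map, so Method can see both through one handle.
    if (auto v = external->get(key)) {
      return v;
    }
    return pte->get(key);
  }
};
```

A Method-like consumer would then hold a single `MergedDataMap` and never need to know which file a given weight came from.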