[wip] Merge ndm and pte_data_map #11579

Closed · wanted to merge 4 commits

Conversation

@lucylq (Contributor) commented on Jun 11, 2025

Stack from ghstack (oldest at bottom):

Merge the named data map (NDM) and the PTE data map to enable cases like LoRA, where we introduce:

  • shared weights in the PTE file (true of any XNNPACK PTE file, since weights are now placed in the shared space by default), and
  • shared weights in an external file.

Pass the merged data map into Method so it can access both; a minimal sketch follows below.

Note: the const qualifiers have to be removed, as the flat tensor data map is modified.

Differential Revision: D76447708
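
The diff itself isn't shown in this thread, but the described behavior amounts to a single map that consults both weight sources. Below is a minimal C++ sketch of that merge, assuming a simplified `DataMap` interface — `DataMap`, `MergedDataMap`, `get_data`, and the PTE-first lookup order are all illustrative assumptions, not ExecuTorch's actual `NamedDataMap` API:

```cpp
#include <cstdint>
#include <optional>
#include <string_view>
#include <vector>

// Hypothetical stand-in for a named data map: resolves a tensor name to
// its bytes. The real ExecuTorch NamedDataMap interface differs; all
// names here are illustrative.
struct DataMap {
  virtual ~DataMap() = default;
  // Returns the bytes for `name`, or std::nullopt if this map has no entry.
  virtual std::optional<std::vector<uint8_t>> get_data(std::string_view name) = 0;
};

// A merged view over the PTE-embedded map and the external-file map.
// Method receives this single map and can resolve weights from either
// source; the PTE-first lookup order is an assumption.
class MergedDataMap : public DataMap {
 public:
  // Non-const pointers: per the PR note, the const qualifiers were
  // dropped because the flat tensor data map is modified.
  MergedDataMap(DataMap* pte_map, DataMap* external_map)
      : pte_map_(pte_map), external_map_(external_map) {}

  std::optional<std::vector<uint8_t>> get_data(std::string_view name) override {
    // Try the weights embedded in the .pte file first...
    if (auto data = pte_map_->get_data(name)) {
      return data;
    }
    // ...then fall back to the external data file (e.g. LoRA adapter weights).
    return external_map_->get_data(name);
  }

 private:
  DataMap* pte_map_;
  DataMap* external_map_;
};
```

With this shape, Method only ever holds one data map, and whether a given weight lives in the PTE segment or the external file is transparent to it.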

pytorch-bot bot commented Jun 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11579

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit bb1695c with merge base c4c4763:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

lucylq added a commit that referenced this pull request Jun 11, 2025
ghstack-source-id: 289741480
Pull Request resolved: #11579
@facebook-github-bot added the CLA Signed label on Jun 11, 2025
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D76447708

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

lucylq added a commit that referenced this pull request Jun 12, 2025
Pull Request resolved: #11579
ghstack-source-id: 289855981
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D76447708

lucylq added a commit that referenced this pull request Jun 12, 2025
Pull Request resolved: #11579
ghstack-source-id: 289886680
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D76447708

lucylq added a commit that referenced this pull request Jun 12, 2025
Pull Request resolved: #11579
ghstack-source-id: 289887980
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D76447708

@lucylq changed the title from "Merge ndm and pte_data_map" to "[wip] Merge ndm and pte_data_map" on Jun 12, 2025
@lucylq closed this on Aug 11, 2025
Labels: CLA Signed, fb-exported
2 participants