
Conversation

@JacobSzwejbka (Contributor)

Summary: Embed the header inside the flatbuffer. We already do this for .pte, and it lets us reuse a lot of flatbuffer tools natively.

Differential Revision: D68578075

@pytorch-bot commented Jan 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7965

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit b73baeb with merge base cd51da4:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Jan 27, 2025
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D68578075

JacobSzwejbka added a commit to JacobSzwejbka/executorch-1 that referenced this pull request Jan 27, 2025
Summary:

Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively.

Differential Revision: D68578075
JacobSzwejbka added a commit to JacobSzwejbka/executorch-1 that referenced this pull request Jan 27, 2025
Summary:

Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively.

Differential Revision: D68578075
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D68578075


@lucylq (Contributor) left a comment:

LGTM.

Nit: rename extended_header to flat_tensor_header for flat_tensor?

@dbort (Contributor) left a comment:

Looks good! Thanks for making this change.

A good double-check would be to try parsing a .ptd file with the flatc command-line tool, like

```
flatc --json --defaults-json --strict-json -o /tmp extension/flat_tensor/serialize/flat_tensor.fbs -- input.ptd
```

and see if it was able to create /tmp/input.json.
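This check works because, with the header embedded, the file starts at the standard flatbuffer layout that flatc expects. As a rough illustration of that layout (not ExecuTorch's actual loader code; the "FT01" identifier and the synthetic buffer below are made up for the example), a flatbuffer file stores a little-endian uint32 root-table offset in bytes 0-4, followed by an optional 4-byte file identifier in bytes 4-8:

```python
import struct

def read_file_identifier(buf: bytes) -> str:
    """Return the 4-char file identifier a flatbuffer stores at bytes 4-8."""
    if len(buf) < 8:
        raise ValueError("buffer too small to hold a flatbuffer header")
    # Bytes 0-4: little-endian uint32 offset to the root table.
    (root_offset,) = struct.unpack_from("<I", buf, 0)
    # Bytes 4-8: the 4-character file identifier, if the schema declares one.
    return buf[4:8].decode("ascii", errors="replace")

# Synthetic example buffer; "FT01" is a hypothetical identifier.
buf = struct.pack("<I", 16) + b"FT01" + b"\x00" * 16
print(read_file_identifier(buf))  # -> FT01
```

Tools like flatc use this identifier to sanity-check that a file matches the schema before decoding it, which is what makes the embedded header approach convenient.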

Summary:

Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively.

Reviewed By: lucylq

Differential Revision: D68578075
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D68578075

@JacobSzwejbka (Author)

Ran the command Dave suggested on a local update to the train xor stack that makes it use .ptd, and it worked. (I'll land that change later once the runtime can load .ptds.)

@facebook-github-bot facebook-github-bot merged commit 3540723 into pytorch:main Jan 28, 2025
44 of 47 checks passed
YIWENX14 pushed a commit that referenced this pull request Jan 28, 2025
Differential Revision: D68578075

Pull Request resolved: #7965
zonglinpeng pushed a commit to zonglinpeng/executorch that referenced this pull request Jan 30, 2025
Differential Revision: D68578075

Pull Request resolved: pytorch#7965

Labels

CLA Signed, fb-exported, topic: not user facing

4 participants