
Conversation

@chenyang78 (Contributor) commented Oct 4, 2023

Summary:
In some scenarios we want to update constants at runtime. In such cases, we have to keep the original constants in the generated code without applying any constant-inlining optimizations.

This PR adds a config option that forces tensor constants to be kept in the generated code.
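A minimal, hypothetical usage sketch of flipping such a switch before compilation (the flag name below is illustrative only; the actual option name and namespace are defined in this diff):

import torch._inductor.config as inductor_config

# Illustrative flag name, not necessarily the one added by this PR:
# when enabled, tensor constants are kept in the generated code
# instead of being inlined, so they can be swapped out at runtime.
inductor_config.always_keep_tensor_constants = True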

Differential Revision: D49895154

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @kadeng @muchulee8 @aakhundov @ColinPeppler

@pytorch-bot (bot) commented Oct 4, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110491

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 4007e1e with merge base e8f1f4e:

UNSTABLE - The following jobs failed, likely due to flakiness present on trunk, and have been marked as unstable.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D49895154

name = f"constant_{name}"
# We may generate a var name for each constant in the codegen.
# Let's only keep sane characters.
name = re.sub(r"[^a-zA-Z0-9_]", "_", name)
Contributor:

Where are the non-identifier chars coming from? Could this result in name collisions?

Contributor (Author):

The special chars come from the model. For example, one model we are dealing with has a bias named something like abc:XYZ, where : is not allowed in a C/C++ variable name.

Very good point regarding the name collisions! Yes, I think we could have name collisions, although the chance might be low. Perhaps we could add a counter suffix to make a unique name? e.g.

name = re.sub(r"[^a-zA-Z0-9_]", "_", name)
name = f"{name}_{next(self.constant_id)"

What do you think? Thanks.
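For concreteness, a small sketch of that counter idea, assuming self.constant_id is an itertools.count() created in the constructor (the class and attribute names here are only illustrative):

import itertools
import re

class ConstantNamer:
    def __init__(self):
        # Monotonic counter so every generated constant name gets a
        # unique numeric suffix.
        self.constant_id = itertools.count()

    def constant_name(self, name):
        # Keep only characters that are legal in a C/C++ identifier.
        name = re.sub(r"[^a-zA-Z0-9_]", "_", name)
        return f"{name}_{next(self.constant_id)}"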

Contributor:

This is generated code and doesn't bleed into how we should store weights for weight refresh (which presumably uses the unmodified original name as a string). Do any users need to reference these generated names? In that case, adding a suffix should be a last resort for duplicates only, and we should track previously seen names. If it's just for readability during debugging, then adding the suffix doesn't appear to have much, if any, downside.

Contributor:

Let's add a suffix and also check for collision.

Contributor (Author):

I don't think the user needs to reference the generated names.

Contributor (Author):

Let's add a suffix and also check for collision.

Done. Thanks.

Contributor:

Let's use a while loop for adding the suffix and checking. Something like:

prefix = re.sub(r"[^a-zA-Z0-9_]", "_", name)
name = prefix
cnt = 0
while name in self.constants:
    name = f"{prefix}_{cnt}"
    cnt += 1

This way the name stays more traceable to the original model, since integer suffixes already show up on un-named vars and are easy to misread (is add_0 a constant or the result of some add operation?).
Also, we don't really need to assert if our artificial name collides; we can just create a new one instead.
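For illustration, a self-contained version of this approach, assuming self.constants is a dict keyed by previously generated names (the helper name and example inputs below are made up):

import re

def unique_constant_name(name, existing):
    # Sanitize to a legal C/C++ identifier, then append a numeric suffix
    # only when the sanitized name is already taken, so most names stay
    # traceable to the original model.
    prefix = re.sub(r"[^a-zA-Z0-9_]", "_", name)
    candidate = prefix
    cnt = 0
    while candidate in existing:
        candidate = f"{prefix}_{cnt}"
        cnt += 1
    return candidate

constants = {}
first = unique_constant_name("layer.0.bias:XYZ", constants)   # layer_0_bias_XYZ
constants[first] = None
second = unique_constant_name("layer.0.bias:XYZ", constants)  # layer_0_bias_XYZ_0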

Contributor (Author):

Good point! Fixed. Thanks.


chenyang78 added a commit to chenyang78/pytorch that referenced this pull request Oct 4, 2023

Reviewed By: muchulee8
Differential Revision: D49895154

@chenyang78 force-pushed the export-D49895154 branch 2 times, most recently from dae0315 to cb396b5 on October 5, 2023 at 19:07

chenyang78 added a commit to chenyang78/pytorch that referenced this pull request Oct 5, 2023

@facebook-github-bot (Contributor)

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) on Oct 7, 2023
@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.
