[FAW] parallel FreqAwareEmbedding #1424

Merged: 10 commits merged into hpcaitech:main on Aug 10, 2022
Conversation

feifeibear (Contributor)

No description provided.

@feifeibear feifeibear marked this pull request as draft August 9, 2022 09:46
@feifeibear feifeibear marked this pull request as ready for review August 10, 2022 05:12
@feifeibear feifeibear changed the title from Ops/cacheembedding3 to [FAW] parallel FreqAwareEmbedding on Aug 10, 2022
Review comment on the diff (excerpt):

```python
        per_sample_weights, self.include_last_offset, self.padding_idx)

    if shape_hook is not None:
        output_shard = shape_hook(output_shard)
```
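The optional shape-hook pattern being reviewed can be sketched in plain Python. This is a hypothetical, dependency-free illustration (the names `forward_shard` and `squeeze_hook` are made up for this sketch), not the actual ColossalAI implementation:

```python
# Minimal sketch of an optional shape hook (hypothetical names;
# not ColossalAI's real FreqAwareEmbedding code).
def forward_shard(output_shard, shape_hook=None):
    # When provided, the hook post-processes the shard's shape,
    # e.g. a view/squeeze to restore the caller's expected layout.
    if shape_hook is not None:
        output_shard = shape_hook(output_shard)
    return output_shard

# Example hook: squeeze a (batch, 1, dim) shard down to (batch, dim),
# emulated here with nested lists instead of tensors.
squeeze_hook = lambda shard: [row[0] for row in shard]

shard = [[[1.0, 2.0]], [[3.0, 4.0]]]
print(forward_shard(shard, squeeze_hook))  # [[1.0, 2.0], [3.0, 4.0]]
```

Without a hook, the shard passes through unchanged, which is why the reviewers focus on what a non-trivial hook (view/squeeze) does to the tensor's metadata.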
zxgx (Contributor):
The shape hook might introduce some tensor view & squeeze operations. I suppose we should be aware of that when using ColoParameter.

feifeibear (Contributor, Author):
OK, can you update it in another PR? I copied this line from your code.

zxgx (Contributor) commented on Aug 10, 2022:
ColoTensor's spec had trouble with view operations before.
Does it support ColoTensor.view and ColoTensor.transpose now?
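The concern here can be illustrated with a toy wrapper (a hypothetical `SpecTensor`, not ColossalAI's ColoTensor): a naive `view` that delegates to the underlying storage returns a plain object and silently drops the sharding spec, which is the failure mode being asked about.

```python
# Toy illustration of losing a distribution spec on view().
# SpecTensor is hypothetical; it is NOT ColossalAI's ColoTensor.
class SpecTensor:
    def __init__(self, data, spec):
        self.data = data   # flat list standing in for tensor storage
        self.spec = spec   # e.g. a sharding/placement description

    def naive_view(self, rows, cols):
        # Delegates to raw storage: the result carries no .spec.
        return [self.data[r * cols:(r + 1) * cols] for r in range(rows)]

    def spec_view(self, rows, cols):
        # Spec-aware view: reshape, then re-attach the spec.
        return SpecTensor(self.naive_view(rows, cols), self.spec)

t = SpecTensor([1, 2, 3, 4], spec="shard(dim=0)")
plain = t.naive_view(2, 2)   # nested list [[1, 2], [3, 4]], spec lost
kept = t.spec_view(2, 2)     # wrapper preserved
print(kept.spec)             # shard(dim=0)
```

A spec-aware tensor type has to intercept view/transpose (as in `spec_view`) and decide what the reshaped spec should be; simply forwarding to the raw data loses that information.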

@feifeibear feifeibear merged commit cb98cf5 into hpcaitech:main Aug 10, 2022