
Conversation

@shewu-quic
Collaborator

We need to apply the SpinQuant R1 and R2 rotations before converting linear to conv.
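
For context, here is a minimal, self-contained sketch of the intended ordering with toy stand-ins for the real transforms (the helper names and bodies below are illustrative only, not the actual executorch/Qualcomm passes):

```python
import torch
import torch.nn as nn


def rotate_linear_weights(model: nn.Module, r: torch.Tensor) -> nn.Module:
    # Toy stand-in for the SpinQuant R1/R2 step: fold a rotation into each
    # nn.Linear weight while the module is still a Linear.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            module.weight.data = module.weight.data @ r
    return model


def linear_to_conv2d(model: nn.Module) -> nn.Module:
    # Toy stand-in for the linear-to-conv rewrite: swap each nn.Linear for an
    # equivalent 1x1 nn.Conv2d. The weight layout changes here, which is why
    # the rotation has to be applied first.
    for name, module in model.named_children():
        if isinstance(module, nn.Linear):
            conv = nn.Conv2d(
                module.in_features, module.out_features, 1,
                bias=module.bias is not None,
            )
            conv.weight.data = module.weight.data.view(*conv.weight.shape)
            if module.bias is not None:
                conv.bias.data = module.bias.data
            setattr(model, name, conv)
        else:
            linear_to_conv2d(module)
    return model


model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 4))
r = torch.eye(4)                         # identity rotation, just for the demo
model = rotate_linear_weights(model, r)  # apply R1/R2 first...
model = linear_to_conv2d(model)          # ...then convert linear to conv
```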

@pytorch-bot

pytorch-bot bot commented Sep 10, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5221

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit dfd80b0 with merge base 657789e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 10, 2024
@shewu-quic
Collaborator Author

shewu-quic commented Sep 10, 2024

Hi @cccclai,
I apologize for the inconvenience. It seems I misplaced the order of the transforms during the rebase.
Could you please review it again? I would greatly appreciate your help.
Thank you very much.

@shewu-quic
Collaborator Author

I have a small request. I would like to add a document that explains how to export llama with Qualcomm AI Engine Direct, including steps like downloading spin quant and setting num_sharding. However, I’m not sure where the best place to save this file would be. Could you please advise?

@shewu-quic shewu-quic force-pushed the dev1/hutton/fixed_spin_quant_r1_r2 branch from 999a992 to 0e23b45 on September 10, 2024 at 11:03
Comment on lines +65 to +68
if self._generate_full_logits:
    return torch.cat(result_logits, dim=1)
else:
    return torch.stack(result_logits, dim=1)
Contributor

hmm what's the difference between these?

Collaborator Author

Because the shape of the function output should be (batch, seq, vocab_size).
If _generate_full_logits is set, each result logit in result_logits already has shape (batch, seq, vocab_size), so we can simply cat them along dim=1.
If not, each result logit has shape (batch, vocab_size), so we need stack to add the missing seq dimension and get (batch, seq, vocab_size).
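
A standalone shape check to illustrate the distinction (this is just an illustration, not code from the PR):

```python
import torch

batch, vocab_size = 2, 8

# full-logits case: each chunk already has a seq dimension, so cat joins them
full_chunks = [torch.randn(batch, 3, vocab_size) for _ in range(2)]
print(torch.cat(full_chunks, dim=1).shape)    # torch.Size([2, 6, 8])

# otherwise each chunk is (batch, vocab_size); stack adds the missing seq dim
last_chunks = [torch.randn(batch, vocab_size) for _ in range(4)]
print(torch.stack(last_chunks, dim=1).shape)  # torch.Size([2, 4, 8])
```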

Contributor

@cccclai cccclai left a comment

Looks good

@cccclai
Contributor

cccclai commented Sep 10, 2024

CI needs to be fixed.

@shewu-quic shewu-quic force-pushed the dev1/hutton/fixed_spin_quant_r1_r2 branch from 807d763 to dfd80b0 on September 10, 2024 at 15:22
@facebook-github-bot
Contributor

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@cccclai cccclai merged commit c76b22f into pytorch:main Sep 10, 2024

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.

3 participants