
Fixed tensor parallelism splits #47

Merged: 4 commits into main on Nov 21, 2023

Conversation

tgaddair (Contributor)

After removing the layer abstraction over the LoRA weights, we introduced a transpose operation, which meant we needed to split on dim=1 rather than dim=0. This was causing tensor parallelism to break, affecting all deployments with more than one GPU.

This PR also fixes support for o_proj, which is row-parallel and needs to be split on dim=0. Previously, there was a bug preventing k_proj and o_proj from being picked up correctly, which is why this was missed.

Closes #46.
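
For context, a minimal sketch of what the split boils down to, using a hypothetical `shard_lora_weight` helper (not the actual code in this PR): the split dimension has to follow the layout the tensor is actually stored in, so a weight that is stored transposed is split on dim=1 where it previously would have been split on dim=0, while the row-parallel o_proj is split on dim=0.

```python
import torch

def shard_lora_weight(weight: torch.Tensor, rank: int, world_size: int, dim: int) -> torch.Tensor:
    # Hypothetical helper for illustration: slice a LoRA weight along `dim`
    # and return this rank's shard. The point of the fix is that `dim` must
    # match the storage layout: after the transpose was introduced, the split
    # moves from dim=0 to dim=1, while row-parallel o_proj stays on dim=0.
    size = weight.shape[dim]
    assert size % world_size == 0, "weight must shard evenly across ranks"
    block = size // world_size
    return weight.narrow(dim, rank * block, block).contiguous()

# Example with assumed shapes: a (r, hidden_size) factor stored transposed,
# so the dimension to shard sits on dim=1.
lora_weight_t = torch.randn(16, 4096)
shard = shard_lora_weight(lora_weight_t, rank=0, world_size=2, dim=1)
print(shard.shape)  # torch.Size([16, 2048])
```

In a real multi-GPU deployment each rank would pass its own index from the process group rather than a hard-coded 0.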

tgaddair mentioned this pull request on Nov 21, 2023

geoffreyangus (Collaborator) left a comment:

LGTM

tgaddair merged commit 188834f into main on Nov 21, 2023 (1 check failed).
tgaddair deleted the fix-tp branch on Nov 21, 2023 at 04:36.

Successfully merging this pull request may close these issues: Sharded adapters not working.