
Add llama test #36

Merged
merged 1 commit into main on Aug 22, 2023
Conversation

@123epsilon (Contributor) commented on Aug 22, 2023

Add a test case for passing llama through the turbine_cpu backend. For simplicity, this replaces all fairscale layers with the corresponding vanilla torch layers; we can add them back later once we have llama working. It also removes the @torch.inference_mode() decorator to avoid the issue documented here, which is not necessarily relevant to the quality of our pipeline.

@stellaraccident (Contributor) left a comment

Thank you.

@123epsilon 123epsilon merged commit 3ede5b4 into main Aug 22, 2023
2 checks passed
@123epsilon 123epsilon deleted the add_llama_test branch August 22, 2023 17:07