
Conversation

@FanhaiLu1 (Collaborator) commented:

This PR does two things:
1: Make Gemma's replica sharding consistent with Llama's (see the first sketch below).
2: Select bfloat16 or float32 in make_env_tiny (see the second sketch below).
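For context on item 1, here is a minimal sketch of what "replica" (fully replicated) sharding means in JAX, which is the behavior the Gemma and Llama weight configs should express the same way. The mesh axis name "x", the array shape, and the use of an empty PartitionSpec are illustrative assumptions, not taken from this PR's diff.

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D mesh over all available devices.
mesh = Mesh(np.array(jax.devices()), axis_names=("x",))

# An empty PartitionSpec means "do not split over any mesh axis",
# i.e. the array is fully replicated on every device. This is the
# behavior a "replicated" entry in a model's sharding config asks for.
replicated = NamedSharding(mesh, PartitionSpec())

# Example: place a small weight on the mesh, fully replicated.
w = jax.device_put(np.ones((8, 8), dtype=np.float32), replicated)
```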
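For item 2, a minimal sketch of the dtype selection the PR describes for make_env_tiny. The bf16_enable flag and the helper's body are assumptions based only on the PR summary, not the repo's actual signature.

```python
import torch

def make_env_tiny(bf16_enable: bool = True):
    # Hypothetical sketch: choose bfloat16 when enabled, otherwise
    # fall back to float32, and make it the default dtype for the
    # tiny test environment.
    torch_dtype = torch.bfloat16 if bf16_enable else torch.float32
    torch.set_default_dtype(torch_dtype)
    # The real helper would construct and return the tiny environment
    # here; returning the dtype keeps the sketch short.
    return torch_dtype
```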

FanhaiLu1 requested review from qihqi and wang2yn84 on May 7, 2024.
FanhaiLu1 merged commit 93c8f8d into AI-Hypercomputer:main on May 8, 2024.