
Conversation

@martin-gorner (Contributor) commented:

This PR fixes several issues:

  1. The default Gemma layout map was still missing the fixes from issue 19496.
  2. It adds a test for Gemma + get_layout_map + LoRA; this test fails without the get_layout_map fix in 1.
  3. It adds a layout map entry for ffw_gating_2, which was apparently forgotten (although there was a test for it!). The chosen value was measured to give the fastest training on TPU_v3_8.
  4. It fixes a typo in the tests: "ffw_linearl" => "ffw_linear".
  5. Finally, it changes the number of heads in the test Gemma config from 4 to 8; otherwise the tests do not pass on TPU.

Both tests, test_distribution and test_distribution_with_lora, now pass on TPU_v3_8.
However, I had to manually change device detection from 'CPU' to 'TPU' in gemma_backbone_test.py, or the distribution tests would be skipped. I suspect these tests are NOT run nightly, and I kindly ask a team member to check this.

…he sharding spec ("batch", "model") is the one that provides the best training performance. ("model", "batch") and (None, None) are slower (the first by 40%, the second by 2%).
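The effect of a sharding spec on the per-device shard can be sketched without any framework: each weight dimension is divided by the size of the mesh axis it is mapped to, and unmapped dimensions (None) are replicated. The mesh shape and weight shape below are assumptions for illustration, not taken from the Gemma config.

```python
# Assumed mesh for a TPU_v3_8-like setup: 1-way "batch", 8-way "model".
MESH = {"batch": 1, "model": 8}


def shard_shape(weight_shape, spec):
    """Per-device shard shape of a weight under a sharding spec.

    Each dimension mapped to a mesh axis is split across that axis;
    a None entry means the dimension is replicated on every device.
    """
    return tuple(
        dim if axis is None else dim // MESH[axis]
        for dim, axis in zip(weight_shape, spec)
    )
```

With an assumed (2048, 16384) weight, ("batch", "model") gives each device a (2048, 2048) shard, while (None, None) replicates the full (2048, 16384) weight everywhere; the spec changes memory and communication patterns, which is why the variants benchmark differently.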

Fixing test too, including typo ffw_linearl => ffw_linear
@github-actions bot added the Gemma (Gemma model specific issues) label on Jun 19, 2024
…ds change necessary for the test to work on TPUs.

Also fixed formatting.
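The heads change (4 to 8) is consistent with a common constraint of model-parallel partitioners: a tensor dimension sharded across a mesh axis generally needs to divide evenly by that axis size (8 devices on TPU_v3_8). This divisibility requirement is an assumption about the partitioner, not something stated in the PR; a minimal sketch:

```python
# Sketch of the assumed constraint: a dimension sharded across N devices
# must be divisible by N. With 8 model-parallel devices, 4 attention
# heads cannot be split evenly, but 8 heads can.
def can_shard(dim_size, num_devices):
    """True if dim_size splits evenly across num_devices."""
    return dim_size % num_devices == 0
```

Under this assumption, `can_shard(4, 8)` is False while `can_shard(8, 8)` is True, matching the observation that the tests only pass on TPU with 8 heads.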
@mattdangerw (Member) left a comment:


Looks good! Just minor comments.

@mattdangerw added the kokoro:force-run (Runs Tests on GPU) label on Jun 20, 2024
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Jun 20, 2024
Better test messages
@mattdangerw merged commit b58b56e into keras-team:master on Jun 21, 2024
