[docs] training on specific hardware #44799
stevhliu wants to merge 6 commits into huggingface:main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
thanks @pcuenca! happy to get your feedback on the rest if you don't mind/have the time!
- local: perf_train_cpu_many
  title: Distributed CPUs
Should we redirect to perf_train_cpu?
I think it's better to merge the two CPU docs rather than redirect. The single perf_train_cpu doc is already quite thin, and perf_train_cpu_many doesn't really fit the other docs in the section, which are more focused on methods rather than hardware.
Yes, I agree! I'm talking about avoiding a 404 when users visit https://huggingface.co/docs/transformers/en/perf_train_cpu_many after it's gone.
Ohhh yes, my bad, I misunderstood!
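For context, the Hugging Face doc-builder supports page redirects through a `_redirects.yml` file in the docs source tree. A sketch of the entry that would avoid the 404 (assuming the merged content ends up in `perf_train_cpu`; file location and final target are assumptions, not confirmed in this thread):

```yaml
# docs/source/en/_redirects.yml (assumed path; sketch only)
# old page: new page, applied after the two CPU docs are merged
perf_train_cpu_many: perf_train_cpu
```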
docs/source/en/perf_train_gaudi.md
Outdated
Refer to the [Gaudi docs](https://docs.habana.ai/en/latest/index.html) for more details.

## Mixed precision

All Gaudi generations support bf16 natively. Only Gaudi 2 and Gaudi 3 support fp16.
fp16 is not supported on any Gaudi generation 😁
Good catch! I'm going to take a look at it and open a PR in Transformers :)
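The corrected support matrix from the comment above (bf16 on all Gaudi generations, fp16 on none) can be sketched as a small helper. This is a hedged illustration with hypothetical names, not the actual Transformers or Optimum Habana API:

```python
# Hypothetical sketch: pick a mixed-precision flag per accelerator,
# reflecting the review note that Gaudi supports bf16 natively on all
# generations and fp16 on none of them.
SUPPORTED_MIXED_PRECISION = {
    "gaudi1": {"bf16"},
    "gaudi2": {"bf16"},
    "gaudi3": {"bf16"},
}

def pick_precision_flag(device: str) -> str:
    """Return 'bf16' or 'fp16' when supported, else 'no' (full fp32)."""
    modes = SUPPORTED_MIXED_PRECISION.get(device, set())
    if "bf16" in modes:
        return "bf16"  # preferred: native on every Gaudi generation
    if "fp16" in modes:
        return "fp16"  # never reached for Gaudi devices per the comment
    return "no"

print(pick_precision_flag("gaudi2"))  # bf16
```

The same selection logic would feed whatever mixed-precision switch the training setup exposes; the dictionary keys and function name here are invented for illustration.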
Updates the Hardware section of the docs for training: