
Fix broken link in README.md for the 1Cycle doc. #60

Merged

2 commits merged into microsoft:master on Feb 11, 2020

Conversation

reddragon
Contributor

Fixed a broken link in the main README.md.

@ShadenSmith
Contributor

Thanks for this! We realized that the link should not be there anyway, as we want the link above it to go to the Advanced Parameter Search page where these things are discussed in more detail.

If you'd like to change the commit to instead drop the link, I'd be very happy to accept. Thanks again for catching the mistake!

@reddragon
Contributor Author

Thanks Shaden, removed the link.

@jeffra self-requested a review on February 11, 2020 03:52
@jeffra merged commit f78ccc0 into microsoft:master on Feb 11, 2020
kouml pushed a commit to kouml/DeepSpeed that referenced this pull request Apr 3, 2020
* Fix broken link for the 1Cycle doc.

* Removed the 1Cycle link from README.md.
jeffra pushed a commit to jeffra/DeepSpeed that referenced this pull request Aug 28, 2020
* adding unit test/s for sparse transformer

* file-name change update

* updated tests based on new list of sparsity configs

* Adding/updating sparsity config (microsoft#68)

* adding/updating sparsity config patterns

* adding random to Variable sparsity

* fixing a typo

* applying comment adding missing argument docstring

* adding unit test/s for sparse transformer

* file-name change update

* updated tests based on new list of sparsity configs

* skipping a test if it is run on gpu with compute capability < 7; minimum V100
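For context on that last bullet, here is a minimal, hypothetical sketch of how such a guard could look in pytest. This is not the actual DeepSpeed test code; the helper and test names are made up for illustration. It skips the test when the GPU's compute capability is below 7.0 (Volta/V100):

```python
# Hypothetical sketch: skip a sparse-transformer test on GPUs older than
# compute capability 7.0 (Volta/V100), which the sparse kernels require.
import pytest
import torch

def _has_compute_capability_7():
    # Treat "no GPU available" the same as "unsupported GPU".
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability(0)
    return major >= 7

@pytest.mark.skipif(not _has_compute_capability_7(),
                    reason="requires compute capability >= 7.0 (minimum V100)")
def test_sparse_self_attention():  # made-up test name for illustration
    ...
```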
jeffra pushed a commit to jeffra/DeepSpeed that referenced this pull request Aug 28, 2020
* updating deepspeed config for Sparse Transformer

* Adding/updating sparsity config (microsoft#68)

* adding/updating sparsity config patterns

* adding random to Variable sparsity

* fixing a typo

* applying comment adding missing argument docstring

* updating deepspeed config for Sparse Transformer

* updating sparsity config for DeepSpeed parameter list

* adding unit test/s for sparse transformer (microsoft#60)

* adding unit test/s for sparse transformer

* file-name change update

* updated tests based on new list of sparsity configs

* Adding/updating sparsity config (microsoft#68)

* adding/updating sparsity config patterns

* adding random to Variable sparsity

* fixing a typo

* applying comment adding missing argument docstring

* adding unit test/s for sparse transformer

* file-name change update

* updated tests based on new list of sparsity configs

* skipping a test if it is run on gpu with compute capability < 7; minimum V100

* fix a naming issue in utils file: bert_mode -> bert (microsoft#69)

* updating deepspeed config for Sparse Transformer

* updating sparsity config for DeepSpeed parameter list
rraminen pushed a commit to rraminen/DeepSpeed that referenced this pull request Apr 28, 2021
microsoft#60)

* move optimizer to deepspeed side when deepspeed is enabled

* revert back adam optimizer changes

* change adam to adamw

* change adamw to adam in zero-2 config

* modify based on the new commit to DS
delock pushed a commit to delock/DeepSpeedSYCLSupport that referenced this pull request Sep 21, 2022
…t#60)

commit a8c119455d2ab3e58f9f6714f023fa0c0c0be817 (HEAD -> xpu-main, origin/xpu-main, origin/HEAD)
Author: Guo Yejun <yejun.guo@intel.com>
Date:   Thu Jul 21 01:18:39 2022 -0700

    add locate data

commit 79ee62539d76c3fefd5b1277b46077217eed71c7
Author: Liangliang-Ma <1906710196@qq.com>
Date:   Fri Aug 26 17:12:26 2022 +0800

    megatron with zero offload scripts (#6)

commit 38cb62ae4a8b468f27e3dddf929e3e53d302bb44
Author: Guo Yejun <yejun.guo@intel.com>
Date:   Fri Aug 19 08:32:34 2022 -0700

    ds_zero2_config_bf16.json&gpt-3.6b.sh: update bs per tile from 1 to 8

commit 65e6bb5e33d5a1244ce665b826f278244f5d0790
Author: Guo Yejun <yejun.guo@intel.com>
Date:   Tue Aug 2 20:49:38 2022 +0800

    ds_zero2_config_bf16.json: use the correct option flops_profiler (#4)
pengwa pushed a commit to pengwa/DeepSpeed that referenced this pull request Oct 14, 2022
* Added GPT pretraining distillation and quantization examples

* updated the compressor initialization API to the latest one

* fixed API calls 

* fixed several compatibility issues of Kd/quantization with respect to the standard gpt training in both checkpointing and tensorboard visualization. 

* dir name typo

* Incorporated Conglong's suggestions.

Co-authored-by: yaozhewei <zheweiy@berkeley.edu>
Co-authored-by: Conglong Li <conglong.li@gmail.com>
rraminen added a commit to rraminen/DeepSpeed that referenced this pull request May 12, 2023
rraminen added a commit to rraminen/DeepSpeed that referenced this pull request May 12, 2023
AdrianAbeyta pushed a commit to groenenboomj/DeepSpeed that referenced this pull request May 24, 2023
3 participants