feat: DeepSpeed Autotune [MLG-201] (#6924)

* Save model info; add Core API DeepSpeed example.

* extract ds profiler results

* Place activation mem into search metric.

* Obtain metrics from the model info file

* remove unused code

* adapted to passing dicts into report_completed

* cleanup and small changes

* refactoring and trial helper classes

* initial random search logic started

* minor changes

* minor cleanup

* use context manager, expanded base searcher

* remove ds_autotuning dir from examples

* minor edits

* readme updates and other cleanup

* bug fixes and a hack to avoid needing nested model dirs

* README updates

* Feat deepspeed autotune (#5875)

Added current Core API prototype.

* switched to triggering their autotuning in our trials

* remove checkpoint wrapper

* implementing checkpointing

* implementing checkpointing

* feat: allow includes in custom searcher experiment [MLG-338] (#6091)

* cleanups, bug fixing, and more examples

* adding native dsat tests

* cleanup

* minor edits

* fix missing is_chief

* Feat deepspeed autotune (#6159)

Trigger the native DS AT exit behavior for all trials.

* minor edits

* Feat deepspeed autotune git fixes (#6180)

deleted extraneous files

* readme fix and make cifar10 work off grenoble (#6187)

* Deepspeed Feature Branch - merge master (#6193)

* docs: Improvements to HPC launcher docs (#6042)

* Provide inline info about agent-specific scheduling options that do not apply to HPC Launcher
  configurations.
* Identify enroot-specific differences from docker (like for Singularity)
* Provide reference to custom resource pools as an option to deal with non-homogeneous
  Slurm/PBS partitions.

* chore: Allow newer Node versions 17-19 (#6038)

* fix: k8s rm gives wrong slot count in rendezvous (#6044)

* chore: bump version: 0.19.12-dev0 -> 0.20.0-dev0 (#6048)

* chore: remove `applicableRoutespace` (#6040)

* chore: warning fixes in web (#6041)

* chore: fix warnings

* chore: change eslint rules

* chore: fix gpu nightly errors (#6046)

* chore: missing nodev18 (#6050)

* chore: add a dedicated exception for cli errors (#5649)

replace sys.exit calls in the CLI with a new user-facing exception.

* fix: SSO layout (#6053)

* chore: clean up UI kit (#6039)

* fix: lopsided training with 2,1 gpus (#6054)

There was a guard to skip local zmq setup when local_size < 2, but that
was no longer valid once local_size varied from worker to worker.

The result is one extra global allgather in some cases, no big deal.

* docs: add rbac ntsc & mr release notes (#6049)

* chore: manual bump version (#6058)

* ci: retry downloading GKE auth plugin [DET-8956] (#6056)

We got a failure due to a timeout on this, so let it retry a few times.

* docs: update Singularity known issues. (#6047)

* fix: Add #rank to worker segment instead of timestamp segment of Pytorch Profiler files [MLG-326] (#6037)

* Add pytorch profiler specific handling logic for appending rank to file name

* Change to use f-string

* fix only file name being passed in

* remove print statement

* fix: handle agent shutdown msg (#6065)

* chore: manually bump vite version (#6066)

* feat: Display better x-axis ticks on charts with time axis [WEB-849] (#6051)

* remove xTickValues from props now that it can be calculated internally

* test: add logging to a flaky test (#6068)

Test is flaky but hard to pin down, so add some prints for next time.

* fix: Unrelated models are shown in a workspace model registry tab (#6067)

* feat: Added task-based historical allocation endpoint [DET-8537] (#6015)

* fix: show `not found` and `spinner` properly (#6070)

* fix: show `not found` and `spinner` properly

* chore: change home redirect path

* fix: projectDetail page

* fix: project.workspaceId

* build(deps): bump golang.org/x/text from 0.3.5 to 0.3.8 in /proto (#6061)

* build(deps): bump golang.org/x/text from 0.3.5 to 0.3.8 in /proto

Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.5 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](golang/text@v0.3.5...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix: gpt-neox docker image and startup hook to work for non-privileged user (#6060)

* test: add locking around migration [DET-8957] (#6071)

In integration tests, multiple processes can attempt to run migrations
against the same database at once, which can lead to errors because
PostgreSQL's `CREATE TABLE IF NOT EXISTS` is not great with concurrency
(it allows for a time-of-check/time-of-use failure).

The specific errors we were seeing were conflicts in the pg_type table,
so the code now locks that table for the duration of the migration
transaction.

More information:
https://www.postgresql.org/message-id/CA+TgmoZAdYVtwBfp1FL2sMZbiHCWT4UPrzRLNnX1Nb30Ku3-gg@mail.gmail.com
https://stackoverflow.com/questions/29900845

* fix: logging inconsistent newlines in slurm (#6074)

* fix: checkpoint helper for points > 1000 and points > maxDatapoints (#6069)

* fix: replace migration table lock with advisory lock (#6077)

Taking a table lock sometimes runs into permissions issues; advisory
locking should avoid that.

Also, I realized the locking should probably be after the deferred
transaction close instead of before.

* build: check npm version on install (#6079)

This removes the `check-requirements` make target in the react folder
and replaces it with npm's native version check against the engines
property. This should make managing the node version slightly easier
because there's one less place to check.

* build: Apply webui lint fixes in precommit (#6078)

* allow linters to fix in precommit

This updates the web linters to automatically apply fixes when doing a
pre-commit check. This should ideally streamline the commit process to
reduce the number of times the user needs to run prettier and eslint
before committing.

* tweak stylelint and eslint commands

* stage changed files

* type file_paths argument

* ci: adjust target accuracy for a test (#6085)

We got one failure [1] where the accuracy ended up just a hair below
0.83, so drop the target.

[1] https://app.circleci.com/pipelines/github/determined-ai/determined/33883/workflows/4a5d3257-6061-4f4d-bd66-096a580a5959/jobs/1194282/steps

* chore: UserBadge moves into design kit (#6086)

* chore: Remove chart feature flag, remove unused code [WEB-930] (#6064)

* tooltipsPlugin and TrialDetailsOverview alternates go into place
* checkpoint helper for points > 1000 and points > maxDatapoints
* move former LearningCurveChart into TrialsComparison
* sync up with #6069 changes

* fix: replace defaultvalue with initialValue (#6076)

* fix: Dont suggest moving model into its current workspace (#6088)

* ci: delete database at beginning of det deploy tests [DET-8937] (#6089)

Previously, the database was being retained between tests, sometimes
causing tests to fail when extra agents appeared due to agent
reattach. The tests should generally be independent anyway, so reset
the database (by default, with an option to disable) each time the
cluster or master comes up.

* feat: add Facepile component (#6081)

* fix: pre-commit web bug fix (#6090)

* ci: make GKE test jobs run serially (#6096)

We keep hitting GKE GPU quotas; this will probably help with that.

* fix: GPU counting for k8s cluster info page (#6094)

* fix: test-e2e-gke-parallel use t4 (#6093)

* ci: retry protoc download (#6095)

We got an incorrect file downloaded one time [1], so retry this
download, like in 2906257 (#5996).

[1] https://app.circleci.com/pipelines/github/determined-ai/determined/34074/workflows/e48681f8-8b75-4349-82eb-06e922d8bfcb/jobs/1202610

* refactor: add Card to UI Kit [WEB-818] (#5893)

* docs: Launcher doc improvements (#6099)

- Generalize journalctl command example --since option to work on Ubuntu.
- Clarify user_name/group_name account requirements.

* feat: Attend to TODOs across the code base (#6087)

* perf: tweak metrics series query. (#6105)

* chore: race could cause run container to return a different error than expected [DET-8870] (#6092)

* chore: add more metadata to slurm logs (#6030)

* chore: remove `ExpCompareMetricNames`, `ExpCompareTrialsSample` endpoints. (#6106)

* docs: fix reported DataPoint label doc (#6107)

* fix: tolerate additional non-CPU, non-GPU quotas in k8s (#6109)

* fix: stop filtering of valid options to reflect build issues (#6116)

* fix: modal theme color (#6117)

* fix: add bgColor in trial comparison table (#6119)

* fix: browser console warnings (#6122)

* fix: browser console warnings

* fix: remove spread operator

* chore: UIKit Pivot renaming (#6120)

* fix: correct minor JSX syntax (#6126)

* docs: add myst_parser extension (#6127)

We would like to support markdown-format documentation.  There are still
some kinks to be worked out with converting rst to myst files, but this
is a start.

* docs: fix some broken redirects (#6129)

* feat: add 3rd batch of TODO removals (#6115)

* feat: generic proxy configs [DET-8761] (#5978)

* build: [MLG-336] Limit the version of protobuf (#6134)

build: [MLG-336] Limit the version of protobuf

Installing the requirement `tensorflow-macos=2.8.0` pulls protobuf as a downstream dependency. Version 4.21 of the Python protobuf package introduced a breaking change that makes it incompatible with tensorflow-2.8.0 (see tensorflow/tensorflow#56077). Later patches to Tensorflow limit the version of protobuf to 3.20. We have a work item to update the tensorflow we include, but until then this change applies the ceiling on tensorflow's protobuf dependency that its later versions enforce.

* chore: update detectron2 example to use v0.6 and reenable nightly test [MLG-301] (#6103)

* Run model in EventStorage context

* Use new Docker images

* Remove pytest.skip from test_detectron2_coco_pytorch_const

* Update README.md

* Minor code reduction

* Dockerfile (listed in .detignore)

* Use determinedai repo instead of a personal repo

* Makefile for building and publishing the Docker image

* docs: Bring content changes from docusaurus-ls beta (#6121)

* docs: Bring content changes from docusaurus-ls beta

Bring over content changes from the beta including reorganization changes.

* additional organizational edits

updating index pages, adding a top nav to welcome page

* added redirects

* revisions based on feedback

* rstfmt run

* feat: display workspace icon in ProjectCard (#6125)

* fix: checkpoint GC should set resource pool (#6136) [DET-9018]

* docs: bump rstfmt version (#6138)

* chore: add dev cli option to get auth token (#6008)

add a `curl` option to help with curling various endpoints

* build(deps): bump golang.org/x/net from 0.0.0-20210405180319-a5a99cb37ef4 to 0.7.0 in /proto (#6130)

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.0.0-20210405180319-a5a99cb37ef4 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/commits/v0.7.0)

* perf: Improved performance of historical allocation task endpoint, removed training/validation times (#6135)

* fix: FOUNDENG-438 Podman tests from the gate are breaking znodes again (#6146)

* chore: add Toggle component to UI Kit [WEB-841] (#6144)

* chore: Add tags to UI kit [WEB-816] (#6100)

* chore: Move SelectFilter into kit folder and update it [WEB-843] (#6102)

* fix: replace `InlineEditor` with UIKit input (#6082)

* fix: replace `InlineEditor`

* fix: add modal for experiment name

* fix: layout of settings page

* fix: setting page

* fix: minor changes

* feat: move experiment `description` and `tags` into edit modal

* chore: add `N/A` when description is empty in experiment detail

* fix: value bug

* fix: revert tag; remove tag from edit modal due to design inconsistency

* chore:  add/test pt-only images and bumpenvs (#6097)

* add pt images to some unit tests

* add pt-images to circleci config

* run bumpenvs procedure

* fix test function signatures and linting

* fix warnings linting

* fix docs

* expand unit tests coverage

* build(deps): bump github.com/prometheus/client_golang from 1.10.0 to 1.11.1 in /master (#6004)

Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.10.0 to 1.11.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](prometheus/client_golang@v1.10.0...v1.11.1)

* feat: add det.import_from_path (#5737)

import_from_path allows users to import arbitrary code from a
checkpoint, even if the modules in the checkpoint have the same name as
modules they already have imported, but contain different code.

This is common when importing, for example, an old model_def.py that has
been updated since the original checkpoint was saved.

* fix: post rank_id correctly for fluent-less logging (#6151) [DET-8999]

* ci: send Slack notification on GKE node pool creation failure (#6152)

In order to prevent quota failures from showing up as CI failures, this
makes node pool creation failure send a Slack notification and mark the
job as successful.

I couldn't figure out how to use the Slack orb while distinguishing this
particular situation from the general failure case, so I just slapped in
a direct request to the already-configured Slack webhook for sending
messages to #ci-bots.

`circleci-agent step halt` marks the job as successful, which is why we
want a notification at all. For some reason, CircleCI fails to provide
an equivalent for marking the job as canceled or some other state
besides success/failure; we could make a call to the CircleCI API to
cancel the current job, but that would rely on having a CircleCI token
available, which we're trying to get away from.

* chore: drop unused columns from `raw_steps`, `raw_validations`, and `raw_checkpoints`. (#6110)

* fix: render spinner while auth check pending (#6098)

* chore: update hpc-launching-architecture doc - add default slurm option --no-requeue (#6141)

* docs: Content updates (#6154)

formatted the setup cluster table to match the approved version in the docusaurus ls beta

* feat: display user id in `det user list`. (#6156)

* fix: Additional tables get experiment- / workspace-specific storagePath [WEB-962] (#6128)

* fix: Additional tables get experiment- and workspace-specific storagePath

* useMemo

* fix: selection width in `move experiment` modal (#6149)

* fix: selection width in `move experiment` modal

* fix: add form wrapping

* chore: remove change

* feat: show trial hyperparameters for custom searchers [MLG-343] (#6162)

* feat: show trial hyperparameters for custom searchers [MLG-343]

* fix: corrected timestamp handling to do an interval overlap instead of contains (#6164)

* chore:  add release notes (#6167)

* chore: add release notes

* format with rstfmt

* chore: suppress help output for det dev (#6145)

avoid showing the `dev` option in `det -h` output

* chore: lock api state for backward compatibility check

* chore: bump version: 0.20.0-dev0 -> 0.20.1-dev0

* fix: separate Router and authCheck (#6170)

* fix: useMemo does not depend on trial having been loaded (#6173)

* chore: pass Labels/project/workspace to TaskSpec (#6172)

* refactor: replace user store with observables [WEB-799] (#6140)

* fix: hide Foldable menu options when button is visible (#6178)

This fixes an issue where, when using a `PageHeaderFoldable` component,
options that appear in the header also always appear in the overflow menu.

* feat: add labels to GCP instances created with det deploy gcp [MLG-170] (#6147)

* feat: add labels to GCP instances created with det deploy gcp

* Changes to mimic det deploy aws --add-tags

* Add labels to other resources as well

* revert: reflag new chart experience (#6181)

* build: eliminate java dependency for typescript swagger bindings (#6139)

* fix: Close experiment fork/continue trial modal properly (#6174)

* fix: Continue Trial flow does not take the new `max_length` (#6168)

* fix: pass workspace ID when creating tensor board from WebUI [WEB-1019] (#6186)

* fix: don't print ':' when err msg is empty (#6190)

* fix: exp move modal (#6183)

* fix: exp move modal

* fix: minor fixes

* fix: add `archived` param and simplify query (#6175)

* fix: add `archived` param and simplify query

* chore: indent

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jerry J. Harrow <84593277+jerryharrow@users.noreply.github.com>
Co-authored-by: Nick Doiron <nick.doiron@hpe.com>
Co-authored-by: NicholasBlaskey <nick.blaskey@hpe.com>
Co-authored-by: Eric <31023784+eecsliu@users.noreply.github.com>
Co-authored-by: Keita Nonaka <keita.nonaka@hpe.com>
Co-authored-by: Erik Wilson <erik.wilson@hpe.com>
Co-authored-by: Hamid Zare <12127420+hamidzr@users.noreply.github.com>
Co-authored-by: johnkim-det <97752292+johnkim-det@users.noreply.github.com>
Co-authored-by: Ryan <rb@hpe.com>
Co-authored-by: Danny Zhu <dzhu@hpe.com>
Co-authored-by: CanmingCobble <107056780+CanmingCobble@users.noreply.github.com>
Co-authored-by: szewaiyuen6 <sze-wai.yuen@hpe.com>
Co-authored-by: Bradley Laney <bradley.laney@hpe.com>
Co-authored-by: julian-determined-ai <103522725+julian-determined-ai@users.noreply.github.com>
Co-authored-by: Corban Beaird <corban.beaird@hpe.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: liamcli <liam@determined.ai>
Co-authored-by: Ashton G <ashton.galloway@hpe.com>
Co-authored-by: thiagodallacqua-hpe <104855841+thiagodallacqua-hpe@users.noreply.github.com>
Co-authored-by: Max <max.russell@hpe.com>
Co-authored-by: Maksim <maksim.kouznetsov@hpe.com>
Co-authored-by: Emily <15078396+EmilyBonar@users.noreply.github.com>
Co-authored-by: Ilia Glazkov <ilia.glazkov@hpe.com>
Co-authored-by: Caleb Hoyoul Kang <caleb.kang@hpe.com>
Co-authored-by: Wes Turner <wesley.turner@hpe.com>
Co-authored-by: Daniel R. Hunter <103537968+drh-determined-ai@users.noreply.github.com>
Co-authored-by: Tara Charter <tara.charter@hpe.com>
Co-authored-by: rcorujo <90728398+rcorujo@users.noreply.github.com>
Co-authored-by: Guangqing Tang <40620519+gt2345@users.noreply.github.com>
Co-authored-by: MikhailKardash <mikhail.kardash@hpe.com>
Co-authored-by: Jagadeesh Madagundi <jagadeesh545@gmail.com>
Co-authored-by: gt2345 <gt2345@columbia.edu>
Co-authored-by: Trent Watson <trent.watson@hpe.com>

* handle add user

* Revert "Deepspeed Feature Branch - merge master (#6193)"

This reverts commit 05ee6a2.

* Move dsat into harness (#6254)

move dsat into harness

* fixed too-large dataset bug (#6262)

* MLG-337 (#6282)

ds_config.json-centered workflow and cleanup

* Feat deepspeed autotune fix merge conflict (#6289)

Merging with master, fixing a merge conflict, and minor cleanup.

* reset webui to latest master (#6291)

* linting fixes (#6296)

* Refactor DS AT for Trial Compatibility (#6307)

Refactored the searcher to use a JSON-based config workflow with overwrite_deepspeed_args, as in (most of) the official examples.

* fix: remove all Close operations (#6383)

Refactored to remove all Close operations to avoid race condition errors. Also added a quick no-op example and fixed other bugs.

* restore util.py

* restoring more files to latest master version

* merge in util.py changes

* Remove OOM catcher.

* Add Close operations back in

* Standardize autotuning config names.

* General clean up

* Fix dsat reporting bug.

* Minor changes.

* Clean up.

* Try/except hack around dead agent due to exit

* Minor changes.

* feat: allow users to specify zero optim search space and runner config (#6452)

* fix: do not merge user zero search config with defaults (#6464)

* populate the custom searcher logs with the correct event (#6470)

* fix: search state accounting (#6473)

* feat: add simple linear batch test searcher (#6482)

* fix: searcher refactor (#6513)

* deprecate the zero_search_config functionality (#6517)

* fix: timing fix and cleanup with tests (#6555)

* remove single should_stop and add more granular state

* adding the beginning of some autotuning tests

* real unit tests for DS AT

* Clean up the trial tracker properties, base searcher, and names

* Do not close twice

---------

Co-authored-by: Taylor Ritenour <taylor.ritenour@hpe.com>

* fix: minor cleanup (#6558)

* feat: deepspeed autotune trial based methods (#6575)

* fix: minor bug broke Trial class DS AT (#6613)

* feat: optional steps completed in context manager (#6619)

Make steps_completed in reporting context optional

Update examples to use updated context manager

minor example changes

Minor model changes

Delete old trial class examples

Add torchvision model trial example

* MLG-369: Some initial tests for DS AT (#6551)

* real unit tests for DS AT

* clean up and standardize the tests more

* also send a Close operation

* touch up the tests

* clarifications and clean up

* feat: deepspeed autotune cli args (#6643)

* Add __main__ and reuse original args on-cluster.

* Cleanup

* Adding include

* Finish adding include

* more args

* add search runner exp_id to follow on exp description

* minor comments

* Different registering system and starting the queue

* more args and cleanup

* About to use a queue (kind of)

* Attempting to enforce trial constraints

* Add early stopping as arg

* Remove old autotuning args and get them instead from cli

* Corrected args bugs

* Fixed many refactoring bugs

* Cleanup and bug fixes

* Move exp_conf edits to _run_dsat

* Add zero stages arg

* Move exp conf changes out of searcher and use zero_stages arg

* Fix simple searcher bug

* minor cleanup

* cleanup

* Starting refactor away from searcher state

* Changed searcher state computation to trial tracker

* Revert to single configs everywhere

* Update readme

* Edit added description text

* Add closed attr to Trials and bug fixes

* clarifying comment

* Rebase onto latest feature branch

* Add actual deque, fix bugs

* Remove print test

* small trial example fix

* Clean up try/except hack

* config edits

* feat: visualize cli args (#6651)

* Add CLI args to config hparams for visualization

* remove pickle path arg

* fix up the dsat tests (#6664)

* fix up the dsat tests

* make sure to pass through the args parsing function

* touch up the tests so that they can issue a failure to the experiment (#6676)

* chore: update deepspeed to 0.8.3 [MLG-399] (#6685)

* feat: hf trainer examples (#6687)

* chore: refactor dsat module to be independent of deepspeed imports [MLG-499] (#6694)

* chore: refactor dsat module to be independent of deepspeed imports [MLG-499]

* also update the test and det_callback.py

* chore: move over the examples for DSAT [MLG-500] (#6717)

* feat: add deepspeed autotuning examples [MLG-500]

* some clean up and UX improvements

* remove double parens, make sure that orchestrator id is on the far left

* feat: add follow on exp option (#6720)

* feat: migrate the huggingface det_callback [MLG-487] (#6724)

* migrating the det_callback

* feat: migrate the huggingface det_callback [MLG-487]

* don't export DetCallback through the top level integrations

* feat: minor test updates (#6746)

* feat: use lock with json overwrite for hf (#6742)

* feat: use lock with json overwrite for hf

* handle merge with our previous DetCallback refactors

---------

Co-authored-by: Garrett Goon <garrett.goon@hpe.com>

* feat: basic trial tracker tests (#6754)

* fix: hf overwrite bug

* fix: remove old import (#6761)

* feat: adding e2e tests for DSAT, enabling searcher to shutdown client experiment [MLG-369] (#6781)

* migrating the det_callback

* feat: migrate the huggingface det_callback [MLG-487]

* wip working on the e2e tests

* getting the basics of the tests running. Still appear to be issues though

* fixing up the tests

* fixing up tests

* handle cases where explicit exceptions are raised in the dsat search runner

* clean up for merging

* revert restarts change

* feat: add search method tests (#6785)

* add search method tests

* update hf ex readme

* quick fix for the unit tests (#6796)

* feat: write best ds config json to checkpoint (#6787)

* feat: refactor argparse into subparsers (#6801)

* feat: add binary search (#6806)

* fix: move lock to fix hf overwrites (#6828)

* fix: small fixes (#6825)

* expand message for model profile info failure

* correct the progress calculation

* Remove autotuning section from checkpointed best configs

* also write the best metrics to the checkpoint

* fix: proper placement of start/end profile step (#6834)

* feat: more random search test (#6837)

* chore: stabilize static typed python [MLG-498] (#6846)

* flake8 fixes

* mypy issues

* updates according to comments for mypy changes (#6864)

* feat: asha (#6852)

* merged in prev asha code

* starting asha tests

* more tests and cleanup

* test cleanup

* asha params closer to current nomenclature

* refactor asha args

* import fix

* mypy

* add asha to __all__

* add search data to stage 3 test

* chore: move hf examples (#6871)

* replace old hf integrations examples with new ones

* fix no-op bug in hf helper function

* fix helper function imports

* update readme

* use the python module for `searcher` directly (#6883)

* use the python module for `searcher` directly rather than importing the individual elements

* additional fixes

* feat: clean up dsat examples (#6891)

* moving files

* moving more files

* more file movement

* stage 1 in config

* core api script cleanup

* deepspeed.yaml core api cleanup

* align deepspeed.yaml files

* align ds_config.json files

* remove lr scheduler

* shorten length to 100

* model_def.py cleanup

* Added checkpointing and better metrics reporting

* cleaned up readme

* change example dir name

* add torchvision examples to test_official.py

* starting e2e tests

* add all e2e tests

* remove accidental double test

* chore: to do cleanup (#6895)

* cli docs

* cli doc strings

* remove todo comment

* update doc strings in dsat _utils.py

* searcher class doc strings

* More search method doc strings for non-public classes

* remove searcher state tests

* remove many todos in _dsat_search_method.py

* remove todos elsewhere

* fix: do not schedule the same trial twice (#6896)

* attempting to fix tests (#6899)

* fix: move examples dir one level up and finish docs/Makefile changes (#6898)

* fix: remove improper test_official.py tests (#6900)

* fix docs formatting (#6902)

* fix docs formatting

* add deepspeed autotune directory to example builds

* support hf examples (#6910)

* feat: ds config from include (#6905)

* move overwrite_deepspeed_config back to det.pytorch.deepspeed

* allow for the ds json to be --include'd

* self.hparams -> hparams bug

* doc string edits

* move ds_config.json back inside of no_op

* chore: update the custom searcher docs [MLG-447] (#6934)

* updating the docs for custom searcher wip

* wip, fixing up the docs, making sure things are clean and link properly

* chore: update the custom searcher docs [MLG-447]

* updates according to comments

* fix docs build

* changes according to comments in dsat branch (#6943)

* changes according to comments.

- removing no_op
- removing cache_dir in hf examples
- removing erroneous release-notes

* revert the changes to the environments so we are in sync with bumpenvs

* adjust the huggingface versions to the current minor version

* update version

* bug: fully wrap hf JSON loading around a FileLock (#6950)

* feat: deepspeed autotune user guide (#6929)

* starting dsat user guide

* import cleanup in hf examples

* more editing

* starting to list cli options

* formatting

* git mv hf examples to make more descriptive dirs

* remove TODO

* incorporating feedback

* links to examples

* cleanup

* Update cli help

* general cli options cleanup

* incorporate taylor's second round of feedback

* incorporate tara's comments

* fix: no auto in hf cli (#6963)

* add int check to hf cli arg overwrite

* fix accidental trivial test

* new test ensuring auto not used in hf cli args

* add checks against copying "auto" to HF CLI entrypoint

* add link to the dsat user guide (#6964)

* add link to the dsat user guide

* updated wording

* add defaults to dsat cli help menu (#6970)

* feat: more tests and asha cleanup (#6966)

* account for asha early stopping in test

* clean up lineage_completed_rung

* more full mock experiment tests

* Only skip completed trials added to queue

* base rungs off latest, rather than root

* max trial computation cleanup

* only include curr_rung <= rung_idx trial in best computation

* promotion respects rung idx test

* test cleanup

* more test cleanup

* add test_get_best_trial_in_lineage

* doc string cleanup

* fix up test_full_experiment_reverse_ordered_results

* minor wording edit

* always pop off highest curr_rung asha trial next

* fix the doc builds, add a release-note (#6973)

* fix the doc builds, add a release-note

* update docs names

* make flake8 behave

* update by the pre-commit complaints

* fix: readme cleanup (#6974)

* touch up hf trainer readme

* shorten up and simplify torchvision readme

* feat: update defaults and small tweaks (#6975)

* trials_per_random_config 5

* max trials 64

* min binary search trials 3

* fix text by avoiding trivial search range

* remove should_discard function to avoid possible locking

* increase timeouts for mock tests

* mypy fix

* update ceiling computation for new random trials

* base the ceiling off the max mbs computation, not the midpoint

* schedule longer lineages first in asha

* update docs to reflect new defaults

* fmt examples

* make isort behave

* address some comments about logging levels and comments

* remove erroneous TODO

* small spelling mistake

* move the `determined.integrations.huggingface.DetCallback` to `determined.transformers.DetCallback`

* fix: comment and environment cleanup (#6988)

* explain try/except in search runner

* remove while true and comments in _deepspeed_trial.py

* fix all deepspeed yaml files

* remove step id comment

* more todo cleanup

* make sure the docs build again

* fix the names of the e2e tests and in README

* don't run so many e2e_tests for deepspeed

* reduce hf image class ds slots per trial (#6998)

* fix e2e tests

* fix the convergence tests

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Maksim Kouznetsov <maksim.kouznetsov@hpe.com>
Co-authored-by: Garrett Goon <garrett.goon@hpe.com>
Co-authored-by: Garrett Goon <44747910+garrett361@users.noreply.github.com>
Co-authored-by: Jerry J. Harrow <84593277+jerryharrow@users.noreply.github.com>
Co-authored-by: Nick Doiron <nick.doiron@hpe.com>
Co-authored-by: NicholasBlaskey <nick.blaskey@hpe.com>
Co-authored-by: Eric <31023784+eecsliu@users.noreply.github.com>
Co-authored-by: Keita Nonaka <keita.nonaka@hpe.com>
Co-authored-by: Erik Wilson <erik.wilson@hpe.com>
Co-authored-by: Hamid Zare <12127420+hamidzr@users.noreply.github.com>
Co-authored-by: johnkim-det <97752292+johnkim-det@users.noreply.github.com>
Co-authored-by: Ryan <rb@hpe.com>
Co-authored-by: Danny Zhu <dzhu@hpe.com>
Co-authored-by: CanmingCobble <107056780+CanmingCobble@users.noreply.github.com>
Co-authored-by: szewaiyuen6 <sze-wai.yuen@hpe.com>
Co-authored-by: Bradley Laney <bradley.laney@hpe.com>
Co-authored-by: julian-determined-ai <103522725+julian-determined-ai@users.noreply.github.com>
Co-authored-by: Corban Beaird <corban.beaird@hpe.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: liamcli <liam@determined.ai>
Co-authored-by: Ashton G <ashton.galloway@hpe.com>
Co-authored-by: thiagodallacqua-hpe <104855841+thiagodallacqua-hpe@users.noreply.github.com>
Co-authored-by: Max <max.russell@hpe.com>
Co-authored-by: Emily <15078396+EmilyBonar@users.noreply.github.com>
Co-authored-by: Ilia Glazkov <ilia.glazkov@hpe.com>
Co-authored-by: Caleb Hoyoul Kang <caleb.kang@hpe.com>
Co-authored-by: Wes Turner <wesley.turner@hpe.com>
Co-authored-by: Daniel R. Hunter <103537968+drh-determined-ai@users.noreply.github.com>
Co-authored-by: Tara Charter <tara.charter@hpe.com>
Co-authored-by: rcorujo <90728398+rcorujo@users.noreply.github.com>
Co-authored-by: Guangqing Tang <40620519+gt2345@users.noreply.github.com>
Co-authored-by: MikhailKardash <mikhail.kardash@hpe.com>
Co-authored-by: Jagadeesh Madagundi <jagadeesh545@gmail.com>
Co-authored-by: gt2345 <gt2345@columbia.edu>
Co-authored-by: Trent Watson <trent.watson@hpe.com>
Co-authored-by: Emily Bonar <emily.bonar@hpe.com>
Showing 77 changed files with 7,147 additions and 370 deletions.
23 changes: 23 additions & 0 deletions docs/example-solutions/examples.rst
@@ -164,6 +164,29 @@ For an introduction to using the training API, please visit the Training APIs se
- CIFAR-10
- :download:`cifar10_cpu_offloading.tgz </examples/cifar10_cpu_offloading.tgz>`

********************
DeepSpeed Autotune
********************

.. list-table::
   :header-rows: 1

   -  - Framework
      - Dataset
      - Filename

   -  - DeepSpeed (PyTorch)
      - ImageNet (Generated)
      - :download:`torchvision.tgz </examples/torchvision.tgz>`

   -  - HuggingFace (DeepSpeed/PyTorch)
      - Beans (HuggingFace)
      - :download:`hf_image_classification.tgz </examples/hf_image_classification.tgz>`

   -  - HuggingFace (DeepSpeed/PyTorch)
      - WikiText (HuggingFace)
      - :download:`hf_language_modeling.tgz </examples/hf_language_modeling.tgz>`

************************
HP Search Benchmarking
************************
2 changes: 2 additions & 0 deletions docs/model-dev-guide/apis-howto/api-core-ug.rst
@@ -1,3 +1,5 @@
.. _core-getting-started:

#####################
Core API User Guide
#####################
303 changes: 303 additions & 0 deletions docs/model-dev-guide/apis-howto/deepspeed/autotuning.rst
@@ -0,0 +1,303 @@
.. _deepspeed-autotuning:

################################
DeepSpeed Autotune: User Guide
################################

.. meta::
   :description: This user guide demonstrates how to optimize DeepSpeed parameters in order to take full advantage of the user's hardware and model.

Getting the most out of DeepSpeed (DS) requires aligning the many DS parameters with the specific
properties of your hardware and model. Determined AI's DeepSpeed Autotune (``dsat``) helps
optimize these settings through an easy-to-use API that requires very few changes to user code, as
described in the remainder of this guide. ``dsat`` can be used with
:class:`~determined.pytorch.deepspeed.DeepSpeedTrial`, :ref:`Core API <core-getting-started>`, and
`HuggingFace Trainer <https://huggingface.co/docs/transformers/main_classes/trainer>`__.

**************
How it Works
**************

You do not need to create a special configuration file to use ``dsat``. Assuming you have working
DeepSpeed code, autotuning is as simple as inserting one or two helper functions into your code and
modifying the launch command.

For instance, let's say your directory contains DeepSpeed code and a corresponding ``single`` trial
experiment configuration file ``deepspeed.yaml``. Then, after inserting a line or two of
``dsat``-specific code per the instructions in the following sections, launching the ``dsat``
experiments is as easy as replacing the usual experiment-launching command:

.. code::

   det experiment create deepspeed.yaml .

with:

.. code::

   python3 -m determined.pytorch.dsat asha deepspeed.yaml .

The above uses Determined AI's DeepSpeed Autotune with the ``asha`` algorithm, one of three
available search methods:

- ``asha``: Adaptively searches over randomly selected DeepSpeed configurations, allocating more
compute resources to well-performing configurations. See :ref:`this introduction to ASHA
<topic-guides_hp-tuning-det_adaptive-asha>` for more details.

- ``binary``: Performs a simple binary search over the batch size for randomly-generated DS
configurations.

- ``random``: Conducts a search over random DeepSpeed configurations with an aggressive
  early-stopping criterion based on domain knowledge of DeepSpeed and the search history.

DeepSpeed Autotune is built on top of Custom Searcher (see
:ref:`topic-guides_hp-tuning-det_custom`), which starts up two separate experiments:

- ``single`` Search Runner Experiment: This experiment coordinates and schedules the trials that
run the model code.
- ``custom`` Experiment: This experiment contains the trials referenced above whose results are
reported back to the search runner.

Initially, a profiling trial is created to gather information regarding the model and computational
resources. The search runner experiment takes this initial profiling information and creates a
series of trials to search for the DS settings which optimize ``FLOPS_per_gpu``, ``throughput``
(samples/second), or latency timing information. The results of all such trials can be viewed in the
``custom`` experiment above. The search is informed both by the initial profiling trial and by the
results of each subsequent trial, which are fed back to the search runner.

*******************
User Code Changes
*******************

To use ``dsat`` with :class:`~determined.pytorch.deepspeed.DeepSpeedTrial`, Core API, and
HuggingFace Trainer, a few changes must be made to your user code. The following sections describe
each use case and the changes it requires.

.. _using_deepspeed_trial:

DeepSpeedTrial
==============

Determined's DeepSpeed Autotune works by inserting DeepSpeed configuration options into the
``overwrite_deepspeed_args`` field of the ``hyperparameters`` dictionary which is seen by each
trial. To take advantage of ``dsat``, you simply need to incorporate these "overwrite" values into
your original configuration.

.. note::

For more information about ``DeepSpeedTrial``, see :ref:`deepspeed-api`.

To facilitate this process, you must add a ``deepspeed_config`` field under the ``hyperparameters``
section of your experiment. This field specifies the relative path to the DS ``json`` configuration
file (written following the `DeepSpeed documentation here
<https://www.deepspeed.ai/docs/config-json/>`_) and is how ``dsat`` is informed of your default
settings. For example, if your default DeepSpeed configuration is stored in ``ds_config.json`` at
the top-level of your model directory, your ``hyperparameters`` section should include:

.. code:: yaml

   hyperparameters:
     deepspeed_config: ds_config.json
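
For reference, a minimal ``ds_config.json`` might look like the following sketch (the values are
purely illustrative; the fields you need depend on your model and on which DeepSpeed features you
enable):

.. code:: json

   {
       "train_micro_batch_size_per_gpu": 8,
       "gradient_accumulation_steps": 1,
       "fp16": {
           "enabled": true
       },
       "zero_optimization": {
           "stage": 1
       }
   }
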
Once this configuration is in place, accessing the appropriate settings dictionary for each trial is
straightforward. You can use the :func:`~determined.pytorch.dsat.get_ds_config_from_hparams` helper
function, which retrieves the configuration from the hyperparameters. You can then pass this
configuration to ``deepspeed.initialize`` as usual:

.. code:: python

   import deepspeed

   from determined.pytorch import dsat
   from determined.pytorch.deepspeed import DeepSpeedTrial, DeepSpeedTrialContext


   class MyDeepSpeedTrial(DeepSpeedTrial):
       def __init__(self, context: DeepSpeedTrialContext) -> None:
           self.context = context
           self.hparams = self.context.get_hparams()
           config = dsat.get_ds_config_from_hparams(self.hparams)
           model = ...
           model_parameters = ...
           model_engine, optimizer, train_loader, lr_scheduler = deepspeed.initialize(
               model=model, model_parameters=model_parameters, config=config
           )

Using Determined's DeepSpeed Autotune with a :class:`~determined.pytorch.deepspeed.DeepSpeedTrial`
instance requires no further changes to your user code.

For a complete example of how to use DeepSpeed Autotune with ``DeepSpeedTrial``, visit the
`Determined GitHub Repo
<https://github.com/determined-ai/determined/tree/master/examples/deepspeed_autotune/torchvision/deepspeed_trial>`__
and navigate to ``examples/deepspeed_autotune/torchvision/deepspeed_trial`` .

Core API
========

When using DeepSpeed Autotune with a Core API experiment, one additional change is needed beyond
the steps in the :ref:`using_deepspeed_trial` section above.

The ``forward``, ``backward``, and ``step`` methods of the ``DeepSpeedEngine`` class need to be
wrapped in the :func:`~determined.pytorch.dsat.dsat_reporting_context` context manager. This
addition ensures that the autotuning metrics from each trial are captured and reported back to the
Determined master.

Here is an example sketch of ``dsat`` code with Core API:

.. code:: python

   for op in core_context.searcher.operations():
       for (inputs, labels) in trainloader:
           with dsat.dsat_reporting_context(core_context, op):  # <-- the new code
               outputs = model_engine(inputs)
               loss = criterion(outputs, labels)
               model_engine.backward(loss)
               model_engine.step()

In this code snippet, ``core_context`` is the :class:`~determined.core.Context` instance which was
initialized with :func:`determined.core.init`. The context manager requires access to both
``core_context`` and the current :class:`~determined.core.SearcherOperation` instance (``op``) to
appropriately report results. Outside of a ``dsat`` context, ``dsat_reporting_context`` is a no-op,
so there is no need to remove the context manager after the ``dsat`` trials have completed.

For a complete example of how to use DeepSpeed Autotune with Core API, visit the `Determined GitHub
Repo
<https://github.com/determined-ai/determined/tree/master/examples/deepspeed_autotune/torchvision/core_api>`__
and navigate to ``examples/deepspeed_autotune/torchvision/core_api`` .

HuggingFace Trainer
===================

You can also use Determined's DeepSpeed Autotune with the HuggingFace (HF) Trainer and Determined's
:class:`~determined.transformers.DetCallback` callback object to optimize your DeepSpeed parameters.

Similar to the previous case (Core API), you need to add a ``deepspeed_config`` field to the
``hyperparameters`` section of your experiment configuration file, specifying the relative path to
the DS ``json`` config file.

Reporting results back to the Determined master requires both the ``dsat.dsat_reporting_context``
context manager and ``DetCallback``.

Furthermore, since ``dsat`` performs a search over different batch sizes and HuggingFace expects
parameters to be specified as command-line arguments, an additional helper function,
:func:`~determined.pytorch.dsat.get_hf_args_with_overwrites`, is needed to create consistent
HuggingFace arguments.

Here is an example code snippet from a HuggingFace Trainer script that contains key pieces of
relevant code:

.. code:: python

   import sys

   import determined as det
   from determined.pytorch import dsat
   from determined.transformers import DetCallback
   from transformers import HfArgumentParser, Trainer, TrainingArguments

   hparams = det.get_cluster_info().trial.hparams
   parser = HfArgumentParser(TrainingArguments)
   args = sys.argv[1:]
   args = dsat.get_hf_args_with_overwrites(args, hparams)
   (training_args,) = parser.parse_args_into_dataclasses(args, look_for_args_file=False)

   det_callback = DetCallback(core_context, ...)
   trainer = Trainer(args=training_args, ...)
   with dsat.dsat_reporting_context(core_context, op=det_callback.current_op):
       train_result = trainer.train(resume_from_checkpoint=checkpoint)

.. important::

- The ``dsat_reporting_context`` context manager shares the same initial
:class:`~determined.core.SearcherOperation` as the ``DetCallback`` instance through its
``op=det_callback.current_op`` argument.

- The entire ``train`` method of the HuggingFace trainer is wrapped in the
``dsat_reporting_context`` context manager.

To find examples that use DeepSpeed Autotune with HuggingFace Trainer, visit the `Determined GitHub
Repo <https://github.com/determined-ai/determined/tree/master/examples/hf_trainer_api>`__ and
navigate to ``examples/hf_trainer_api``.

******************
Advanced Options
******************

The command-line entrypoint to ``dsat`` has various available options, some of them
search-algorithm-specific. All available options for any given search method can be found through
the command:

.. code::

   python3 -m determined.pytorch.dsat asha --help

and similarly for the ``binary`` and ``random`` search methods.

Flags that are particularly important are detailed below.

General Options
===============

The following options are available for every search method.

- ``--max-trials``: The maximum number of trials to run. Default: ``64``.

- ``--max-concurrent-trials``: The maximum number of trials that can run concurrently. Default:
``16``.

- ``--max-slots``: The maximum number of slots that can be used concurrently. Defaults to ``None``,
i.e., there is no limit by default.

- ``--metric``: The metric to be optimized. Defaults to ``FLOPS-per-gpu``. Other available options
are ``throughput``, ``forward``, ``backward``, and ``latency``.

- ``--run-full-experiment``: If specified, after the ``dsat`` experiment has completed, a
``single`` experiment will be launched using the specifications in the ``deepspeed.yaml``
overwritten with the best-found DS configuration parameters.

- ``--zero-stages``: This flag allows the user to limit the search to a subset of the stages by
providing a space-separated list, as in ``--zero-stages 2 3``. Default: ``1 2 3``.

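For example, several of the options above can be combined in a single invocation (an illustrative
command only; adjust the values to your cluster and model):

.. code::

   python3 -m determined.pytorch.dsat asha deepspeed.yaml . \
       --max-trials 32 \
       --max-concurrent-trials 4 \
       --metric throughput \
       --zero-stages 2 3 \
       --run-full-experiment
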
.. _asha-options:

``asha`` Options
================

The ``asha`` search algorithm randomly generates various DeepSpeed configurations and attempts to
tune the batch size for each configuration through a binary search. ``asha`` adaptively allocates
resources to explore each configuration (providing more resources to promising lineages) where the
resource is the number of steps taken in each binary search (i.e., the number of trials).

``asha`` can be configured with the following flags:

- ``--max-rungs``: The maximum total number of rungs to use in the ASHA algorithm. Larger values
allow for longer binary searches. Default: ``5``.

- ``--min-binary-search-trials``: The minimum number of trials to use for each binary search. The
``r`` parameter in `the ASHA paper <https://arxiv.org/abs/1810.05934>`_. Default: ``3``.

- ``--divisor``: Factor controlling the increased computational allotment across rungs, and the
decrease in their population size. The ``eta`` parameter in `the ASHA paper
<https://arxiv.org/abs/1810.05934>`_. Default: ``2``.

- ``--search_range_factor``: The inclusive, initial ``hi`` bound on the binary search is set by an
approximate computation (the ``lo`` bound is always initialized to ``1``). This parameter adjusts
the ``hi`` bound by a factor of ``search_range_factor``. Default: ``1.0``.

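As an illustration only, the ``asha``-specific flags can be combined with the general options
above, for example:

.. code::

   python3 -m determined.pytorch.dsat asha deepspeed.yaml . \
       --max-rungs 4 \
       --min-binary-search-trials 2 \
       --divisor 2 \
       --search_range_factor 1.5
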
``binary`` Options
==================

The ``binary`` search algorithm performs a straightforward search over the batch size for a
collection of randomly-drawn DS configurations. A single option is available for this search:
``--search_range_factor``, which plays precisely the same role as in the :ref:`asha-options` section
above.

``random`` Options
==================

The ``random`` search algorithm performs a search over randomly drawn DS configurations and uses a
semi-random search over the batch size.

``random`` can be configured with the following flags:

- ``--trials_per_random_config``: The maximum number of trials (batch-size configurations) that
  will be tested for a given DS configuration. Default: ``5``.

- ``--early-stopping``: If provided, the experiment will terminate if a new best configuration has
not been found in the last ``early-stopping`` trials. Default: ``None``, corresponding to no such
early stopping.
6 changes: 5 additions & 1 deletion docs/model-dev-guide/apis-howto/deepspeed/overview.rst
@@ -31,6 +31,9 @@ Determined DeepSpeed documentation:
:class:`~determined.pytorch.PyTorchTrial` to
:class:`~determined.pytorch.deepspeed.DeepSpeedTrial`.

- :ref:`DeepSpeed Autotune: User Guide <deepspeed-autotuning>` demonstrates how to use DeepSpeed
Autotune to take full advantage of your hardware and model.

- :ref:`API Reference <deepspeed-reference>` lays out the classes and methods related to DeepSpeed
support including the full API specification for
:class:`~determined.pytorch.deepspeed.DeepSpeedTrial` and
@@ -40,6 +43,7 @@ Determined DeepSpeed documentation:
:maxdepth: 1
:hidden:

deepspeed
API Usage Guide <deepspeed>
Autotuning <autotuning>
advanced
pytorch2deepspeed
13 changes: 9 additions & 4 deletions docs/model-dev-guide/hyperparameter/search-methods/hp-custom.rst
@@ -30,6 +30,11 @@ To run the custom hyperparameter tuning algorithm, you can use:
- :class:`~determined.searcher.LocalSearchRunner` to run on your machine,
- :class:`~determined.searcher.RemoteSearchRunner` to run on a Determined cluster.

.. note::

Using :class:`~determined.searcher.RemoteSearchRunner` will create two experiments, with one
orchestrating the hyperparameter search of the other.

Both search runners execute the custom hyperparameter tuning algorithm and start a multi-trial
experiment on a Determined cluster.

@@ -89,9 +94,9 @@ look like the following ``run_local_searcher.py``:
To start the custom search method locally, you can use the following CLI command:

.. code:: python
.. code:: bash
python run_local_searcher.py
$ python run_local_searcher.py
****************************************
Run Hyperparameter Search on a Cluster
@@ -123,9 +128,9 @@ A script to run your custom search method on a Determined cluster may look like
To start the custom search method on a cluster, you need to submit it to the master as a
single-trial experiment. To this end, you can use the following CLI command:

.. code:: python
.. code:: bash
det e create searcher_config.yaml context_dir
$ det e create searcher_config.yaml context_dir
The custom search method runs on a Determined cluster as a single trial experiment. Configuration
for the search method experiment is specified in the ``searcher_config.yaml`` and may look like
