
Conversation

dependabot[bot] commented on behalf of github Apr 15, 2022

Bumps pytorch-lightning from 1.0.4 to 1.6.0.

Release notes

Sourced from pytorch-lightning's releases.

PyTorch Lightning 1.6: Support Intel's Habana Accelerator, New efficient DDP strategy (Bagua), Manual Fault-tolerance, Stability and Reliability.

The core team is excited to announce the PyTorch Lightning 1.6 release ⚡

Highlights

PyTorch Lightning 1.6 is the work of 99 contributors who have worked on features, bug-fixes, and documentation for a total of over 750 commits since 1.5. This is our most active release yet. Here are some highlights:

Introducing Intel's Habana Accelerator

Lightning 1.6 now supports the Habana® framework, which includes Gaudi® AI training processors. Their heterogeneous architecture includes a cluster of fully programmable Tensor Processing Cores (TPC), the associated development tools and libraries, and a configurable Matrix Math engine.

You can leverage the Habana hardware to accelerate your Deep Learning training workloads simply by passing:

import pytorch_lightning as pl

trainer = pl.Trainer(accelerator="hpu")

# single Gaudi training
trainer = pl.Trainer(accelerator="hpu", devices=1)

# distributed training with 8 Gaudi
trainer = pl.Trainer(accelerator="hpu", devices=8)

The Bagua Strategy

Bagua is a deep learning training acceleration framework that supports multiple advanced distributed training algorithms with state-of-the-art system relaxation techniques. Enabling Bagua, which can be considerably faster than vanilla PyTorch DDP, is as simple as:

from pytorch_lightning.strategies import BaguaStrategy

trainer = pl.Trainer(strategy="bagua")

# or to choose a custom algorithm
trainer = pl.Trainer(strategy=BaguaStrategy(algorithm="gradient_allreduce"))  # default

Towards stable Accelerator, Strategy, and Plugin APIs

The Accelerator, Strategy, and Plugin APIs are a core part of PyTorch Lightning. They're where all the distributed boilerplate lives, and we're constantly working to improve both them and the overall PyTorch Lightning platform experience.

In this release, we've made some large changes to achieve that goal. Not to worry, though! The only users affected by these changes are those who use custom implementations of Accelerator and Strategy (TrainingTypePlugin) as well as certain Plugins. In particular, we want to highlight the following changes:

  • All TrainingTypePlugins have been renamed to Strategy (#11120). Strategy is a more appropriate name because it encompasses more than simply training communication. This change is now aligned with the changes we implemented in 1.5, which introduced the new strategy and devices flags to the Trainer (a minimal sketch follows the excerpt).

... (truncated)
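A minimal sketch (not from the release notes) of the 1.5-introduced Trainer flags that the renamed Strategy classes back; the accelerator, device count, and strategy values are illustrative:

import pytorch_lightning as pl

# strategy selects the (renamed) Strategy implementation, e.g. DDP;
# accelerator and devices pick the hardware it runs on
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")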

Changelog

Sourced from pytorch-lightning's changelog.

[1.6.0] - 2022-03-29

Added

  • Allow logging to an existing run ID in MLflow with MLFlowLogger (#12290); see the sketch after this excerpt
  • Enable gradient accumulation using Horovod's backward_passes_per_step (#11911)
  • Add new DETAIL log level to provide useful logs for improving monitoring and debugging of batch jobs (#11008)
  • Added a flag SLURMEnvironment(auto_requeue=True|False) to control whether Lightning handles the requeuing (#10601)
  • Fault Tolerant Manual
    • Add _Stateful protocol to detect if classes are stateful (#10646)
    • Add _FaultTolerantMode enum used to track different supported fault tolerant modes (#10645)
    • Add a _rotate_worker_indices utility to reload the state according to the latest worker (#10647)
    • Add stateful workers (#10674)
    • Add a utility to collect the states across processes (#10639)
    • Add logic to reload the states across data loading components (#10699)
    • Cleanup some fault tolerant utilities (#10703)
    • Enable Fault Tolerant Manual Training (#10707)
    • Broadcast the _terminate_gracefully to all processes and add support for DDP (#10638)
  • Added support for re-instantiation of custom (subclasses of) DataLoaders returned in the *_dataloader() methods, i.e., automatic replacement of samplers now works with custom types of DataLoader (#10680)
  • Added a function to validate if fault tolerant training is supported (#10465)
  • Added a private callback to manage the creation and deletion of fault-tolerance checkpoints (#11862)
  • Show a better error message when a custom DataLoader implementation is invalid and needs to be reconstructed (#10719)
  • Show a better error message when a frozen dataclass is used as a batch (#10927)
  • Save the Loop's state by default in the checkpoint (#10784)
  • Added Loop.replace to easily switch one loop for another (#10324)
  • Added support for --lr_scheduler=ReduceLROnPlateau to the LightningCLI (#10860)
  • Added LightningCLI.configure_optimizers to override the configure_optimizers return value (#10860)
  • Added LightningCLI(auto_registry) flag to register all subclasses of the registerable components automatically (#12108)
  • Added a warning that shows when max_epochs in the Trainer is not set (#10700)
  • Added support for returning a single Callback from LightningModule.configure_callbacks without wrapping it into a list (#11060)
  • Added console_kwargs for RichProgressBar to initialize inner Console (#10875)
  • Added support for shorthand notation to instantiate loggers with the LightningCLI (#11533)
  • Added a LOGGER_REGISTRY instance to register custom loggers to the LightningCLI (#11533)
  • Added info message when the Trainer arguments limit_*_batches, overfit_batches, or val_check_interval are set to 1 or 1.0 (#11950)
  • Added a PrecisionPlugin.teardown method (#10990)
  • Added LightningModule.lr_scheduler_step (#10249)
  • Added support for no pre-fetching to DataFetcher (#11606)
  • Added support for optimizer step progress tracking with manual optimization (#11848)
  • Return the output of the optimizer.step. This can be useful for LightningLite users, manual optimization users, or users overriding LightningModule.optimizer_step (#11711)
  • Teardown the active loop and strategy on exception (#11620)
  • Added a MisconfigurationException if the user-provided opt_idx in the scheduler config doesn't match the actual index of its respective optimizer (#11247)
  • Added a loggers property to Trainer which returns a list of loggers provided by the user (#11683)
  • Added a loggers property to LightningModule which retrieves the loggers property from Trainer (#11683)
  • Added support for DDP when using a CombinedLoader for the training data (#11648)
  • Added a warning when using DistributedSampler during validation/testing (#11479)
  • Added support for Bagua training strategy (#11146)
  • Added support for manually returning a poptorch.DataLoader in a *_dataloader hook (#12116)
  • Added rank_zero module to centralize utilities (#11747)
  • Added _Stateful support for LightningDataModule (#11637)
  • Added _Stateful support for PrecisionPlugin (#11638)

... (truncated)
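A brief sketch of two of the additions above, MLflow run-ID logging (#12290) and the Trainer.loggers property (#11683); it assumes mlflow is installed, and the experiment name and run ID are placeholders:

import pytorch_lightning as pl
from pytorch_lightning.loggers import MLFlowLogger

# attach the logger to an existing MLflow run instead of creating a new one
logger = MLFlowLogger(experiment_name="demo", run_id="abc123")
trainer = pl.Trainer(logger=logger, max_epochs=1)
print(trainer.loggers)  # list of loggers provided by the user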

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.0.4 to 1.6.0.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](Lightning-AI/pytorch-lightning@1.0.4...1.6.0)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Apr 15, 2022
chensuyue commented

Will fix in the internal repo first.

chensuyue closed this Apr 19, 2022
dependabot[bot] commented on behalf of github Apr 19, 2022

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

dependabot[bot] deleted the dependabot/pip/examples/pytorch/nlp/huggingface_models/common/examples/research_projects/pplm/pytorch-lightning-1.6.0 branch April 19, 2022 14:49
VincyZhang pushed a commit that referenced this pull request Feb 12, 2023
xin3he added a commit that referenced this pull request Feb 14, 2025
…xt for llava models [llava-1.5-7b-hf] [llava-1.5-13b-hf ] (#54) (#77)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>
yiliu30 pushed a commit that referenced this pull request Feb 14, 2025
…xt for llava models [llava-1.5-7b-hf] [llava-1.5-13b-hf ] (#54) (#77)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>
XuehaoSun added a commit that referenced this pull request Feb 27, 2025
* [SW-210525] release HPU memory when loading neural_magic fp8 models (#48)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-211178] save generation_config when saving model if exists (#57)

* [SW-211178] save generation_config when saving model if exists

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-210543] update gitignore to simplify the git message (#50)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-205334][SW-187731] llama70b vLLM fix graph breaks with torch.compile (#67)

* fix graph breaks with torch.compile

* remove orig_mod from helper_modules

* fix typos

* fix test_register_apis

---------

Co-authored-by: Rafal Litka <rlitka@habana.ai>

* [SW-213890] Disable test_two_step_layer_wise temporarily (#84)

* [SW-205437] - Support LM-HEAD patching (#79)

* [SW-205437] - Support LM-HEAD patching

* fix CR comments

* Enhance and rename fix_measurements tool to postprocessing_vllm_measurements (#82)

* [SW-214088] Fix graph break caused by PatchedMixtralMoE (#74)

* [SW-208528] Support FP8 per channel Q/DQ (#13) (see the sketch after this entry)

* add per channel qdq support

Signed-off-by: changwang <changwang@habana.ai>

* improve ut

Signed-off-by: changwang <changwang@habana.ai>

* improve get_scale_dtype func and qdq init

Signed-off-by: changwangss <changwang@habana.ai>

* improve DequantOutput QuantInput init

Signed-off-by: changwangss <changwang@habana.ai>

* add scale_method, improve PCQ

Signed-off-by: changwangss <changwang@habana.ai>

* remove scale name

Signed-off-by: changwangss <changwang@habana.ai>

* fix PCQ scale_inv expanding

Signed-off-by: changwangss <changwang@habana.ai>

* merge qdq_per_channel and qdq_per_tensor into qdq

Signed-off-by: changwangss <changwang@habana.ai>

* move scale_inv change to the QuantInput init

Signed-off-by: changwangss <changwang@habana.ai>

* remove scale_dtype list check

Signed-off-by: changwangss <changwang@habana.ai>

* fix missing axis parameter

Signed-off-by: changwangss <changwang@habana.ai>

---------

Signed-off-by: changwang <changwang@habana.ai>
Signed-off-by: changwangss <changwang@habana.ai>
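A generic illustration of per-channel FP8 Q/DQ, not this repository's implementation: each output channel gets its own scale, computed here from its max-abs value (assumes a PyTorch build with float8 dtypes):

import torch

w = torch.randn(8, 16)  # [out_channels, in_features]
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale = w.abs().amax(dim=1, keepdim=True) / fp8_max  # one scale per output channel
q = (w / scale).to(torch.float8_e4m3fn)              # quantize
dq = q.to(torch.float32) * scale                     # dequantize
print((w - dq).abs().max())  # per-channel reconstruction error stays small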

* [SW-204341] explicit scale format for ops (#73)

* [SW-204341] explicit scale format for ops

Added a wrapper around fp8 functions. The wrapper decides which flavor of the function to call according to the scale format; helper modules call the wrapper, which likewise decides which cast flavor to call according to the scale format (see the sketch after this entry).

* [SW-204341] Adjust softmax API, remove commented-out code

* [SW-204341] Fixes from CR 1

* [SW-204341] Fixed CR 2

* [SW-204341] add missing arg in fsdpa

Signed-off-by: Uri Livne <ulivne@habana.ai>

* [SW-204341] Enhance SDPA for measure and quant

* [SW-204341] remove sdpa quantized ops

* reland per-op class with more enhancements

* [SW-204341] reland specific arguments, rename class to wrapper

* added call with self in patched lm head

rebased on top of master next
force push

* fix mistake in conflict resolution

restore MethodType fix

* another fix

* modified fp8 matmul test to test quantized matmul func

* another fix of rebase mistake

* hopefully last rebase mistake fix

* restore backward-compatibility import protection

---------

Signed-off-by: Uri Livne <ulivne@habana.ai>
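A toy sketch of that dispatch idea with hypothetical names, not the repository's wrapper classes: one entry point inspects the scale format and calls the matching flavor.

import torch

def _quant_scalar_scale(x, s):  # flavor for a plain scalar scale
    return x / s

def _quant_tensor_scale(x, s):  # flavor for a per-row scale tensor
    return x / s.view(-1, 1)

def quant(x, scale):
    # the wrapper decides which flavor to call, according to scale format
    if isinstance(scale, torch.Tensor) and scale.ndim > 0:
        return _quant_tensor_scale(x, scale)
    return _quant_scalar_scale(x, float(scale))

x = torch.randn(4, 3)
print(quant(x, 2.0).shape)                                 # scalar-scale path
print(quant(x, torch.tensor([1.0, 2.0, 3.0, 4.0])).shape)  # tensor-scale path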

* [SW-213890] Revert "[SW-213890] Disable test_two_step_layer_wise temporarily (#84)" (#86)

This reverts commit 27162ae.

* Revert "[SW-205334][SW-187731] llama70b vLLM fix graph breaks with  torch.com…" (#87)

This reverts commit 01a5734.

Co-authored-by: Danny Semiat <dsemiat@habana.ai>

* [ALGO-809] PatchedLmHeadLinearAllreduce: replacing the sharding code with the one from deepspeed-fork (#85)

Change-Id: Icb9670cfefdd1880c1ebb9a804a97c9ba79ecdc3

Co-authored-by: smarkovichgolan <smarkovich@habana.ai>

* fix bug where FusedMoE object has no attribute w13_weight (#94)

Signed-off-by: yuwenzho <yuwen.zhou@intel.com>

* [SW-208588] Add HPU fp8 Dynamic MOE (#88)

* [SW-208588] Add HPU fp8 Dynamic MOE

* fix review comments

* fix more review comments

* fix comments

* fix tests

* minor config fixes (#96)

* [SW-0] minor cosmetic fixes in quant_config

* remove hooks

* [SW-196641] - Fix type mismatch in linear quantization unit tests (#99)

* [SW-196641] - Fix type mismatch in linear quantization unit tests

* fix atol value

* add hp_dtype to fp8 config dict before parsing

* [SW-214785] Apply PatchedModuleBase for all existing PatchedModules (#92)

* [SW-214785] Apply PatchedModuleBase for all existing PatchedModules

Signed-off-by: Xin He <xinhe3@habana.ai>

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-215319] threshold of memory usage in test_block_wise.py is too tight (#100)

* [SW-215543] Revert "minor config fixes (#96)" (#104)

This reverts commit fa40142.

* fix RowParalleLinear func names from string to tuple (#106)

* [SW-215615] memory is unreleased during loading neural_magic models on multi-cards (#105)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-212423] RuntimeError when load the gptq model from HF (#70)

* [SW-212423] RuntimeError when load the gptq model from HF
* skip tie_word_embeddings=False

Signed-off-by: Xin He <xinhe3@habana.ai>

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-214785] fix issue when self._mod_extra_config is None (#108)

* [SW-211826] [example] demonstrate layer-wise, block-wise and lm_eval usage (#66)

* [SW-211826] [example] demonstrate layer-wise & block-wise usage to quantize LLMs with limited host & device memory

Signed-off-by: Xin He <xinhe3@habana.ai>

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-215295] Force single object from quantized func wrapper classes (#103)

* [SW-215295] Force single object from quantized func wrapper classes

* Modify the factory object to be cleared after module patching

* Move cleanup to Quantizer object

* [SW-216292] Minor update for lm-eval (#113)

* Enable lm-eval 0.4.2 and expose `add_bos_token`

---------

Signed-off-by: Yi Liu <yiliu4@habana.ai>
Co-authored-by: Yi Liu <yiliu4@habana.ai>

* [SW-209207] add vllm fp8 dynamic MoE (#116)

* [SW-216239] Align Softmax fp8 scale calc with configuration (#112)

* [SW-217321] Skip auto round tests (#119) (#125)

* Test Commit

* [SW-217321] Skip auto round tests due to CI breakage

* remove unneeded print

* [SW-207451] Implement block-wise calibration for LLM (#24)

For LLMs, measurement on bf16 requires high HPU memory usage.
This change helps measure bf16 llama-405b on 8 Gaudi2 cards, or llama-70b on 1 Gaudi card (a conceptual sketch follows).
Limitation: the lm_head layer cannot be measured; we may enhance this later.
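A conceptual sketch of the idea in generic torch code, not the Intel Neural Compressor API: process one block at a time so only that block's activations and statistics need to live on the device, keeping bf16 measurement of a large model within memory limits. The toy Linear blocks stand in for decoder blocks.

import torch
import torch.nn as nn

blocks = nn.ModuleList(nn.Linear(16, 16) for _ in range(4))  # stand-ins for decoder blocks
x = torch.randn(2, 16)
max_abs = []
with torch.no_grad():
    for block in blocks:
        # in practice: move the single block to the accelerator, run it, move it back
        x = block(x)
        max_abs.append(x.abs().max().item())  # per-block max-abs statistic for calibration
print(max_abs)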

---------

Signed-off-by: Xin <xin3.he@intel.com>
Co-authored-by: Xin He <xinhe3@habana.ai>
Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-197077] fix bug in output arbitrary scales (#45)

* [SW-197077] fix bug

* [SW-197077] fix bug in outputs arbitrary scales

Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-210500] [Optimum-Habana] [Regression] [fp8] [INC] No generated text for llava models [llava-1.5-7b-hf] [llava-1.5-13b-hf ] (#54) (#77)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-213236] resolve CPU mem issue in CI (#76) (#83)

Cherry-pick from 1.19
Co-authored-by: Xin He <xin3.he@intel.com>

* [SW-213368] requirements_pt.txt: allow newer pydantic versions to >= 1.10.13 (#80)

* requirements_pt.txt: upgrade pydantic version to >= 2.0.0

* allow newer versions of pydantic

newer deepspeed uses pydantic v2, which has slightly different APIs (see the sketch below).

* Update requirements_pt.txt
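A sketch of the resulting constraint in requirements_pt.txt; the surrounding file contents are not shown in this log, so only the pydantic line is illustrated:

# a floor rather than a v2-only pin, so pip may resolve pydantic 1.10.13+ or 2.x
pydantic>=1.10.13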

* [SW-212057] Enable scalar scale to support QDQ (#98)

* [SW-212057] Enable scalar scale to support QDQ

Change-Id: Ib5f5accd7a770675609e91c18bd04497b15937c5

* PR comment fixes

Change-Id: I01be41c29721b8d59c887f3d2b4e3cef8433331c
Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-215845] Run some unit tests from top level API (#109)

Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-212629] Support saving weight-only quantization INT4 model in Hugging Face format (#101)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>
Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-205970] update state_dict to save scalar scales (#6)

* update state_dict method in save/load function

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>
Signed-off-by: Xin He <xinhe3@habana.ai>

* Revert "[SW-205970] update state_dict to save scalar scales (#6)" (#114)

This reverts commit ffcb97e.

* [SW-212092] Save vllm compatible format (#102)

* save vllm compatible format

Signed-off-by: changwangss <changwang@habana.ai>

* add assertion and make max_file_size human-readable

Signed-off-by: changwangss <changwang@habana.ai>

* support defaulting to the same behavior as huggingface when saving

Signed-off-by: changwangss <changwang@habana.ai>

* separate save function for single-device and multi-device cases.

Signed-off-by: changwangss <changwang@habana.ai>

* rebase

Signed-off-by: changwangss <changwang@habana.ai>

* rebase save

Signed-off-by: changwangss <changwang@habana.ai>

* remove weight and scale conversion on G2

Signed-off-by: changwangss <changwang@habana.ai>

* rebase master_next due to revert #6

Signed-off-by: changwangss <changwang@habana.ai>

* improve the function converting weights to vllm-compatible format

Signed-off-by: changwangss <changwang@habana.ai>

* replace print with logger

Signed-off-by: changwangss <changwang@habana.ai>

* move unit_mapping to common utils

Signed-off-by: changwangss <changwang@habana.ai>

---------

Signed-off-by: changwangss <changwang@habana.ai>
Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-205970] update state_dict to save scalar scales (#115)

* [SW-205970] update state_dict to save scalar scales (#6)

* update state_dict method in save/load function

* support mixtral
---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* [SW-215009] support loading per-channel scales (#95)

* [SW-215009] support loading per-channel scales

Signed-off-by: Xin He <xinhe3@habana.ai>

* fix UT

Signed-off-by: Xin He <xinhe3@habana.ai>

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* Refactoring scales (#22) (#122)

* Refactoring scales (#22)

* [SW-197077] refactoring maxabs scales and adding arbitrary scales.

* [SW-199696] Supporting Dynamic Quantization (#128)

* Calculating dynamic scales using nn.Modules (see the sketch after this entry)

Change-Id: I8c344ae737803b39117037edaaa3d3b9cbd09f30

* [SW-199696] Supporting Dynamic Quantization

Change-Id: Ic5d6f04ec0b5032ac305e1b3097747c47250385b

* Code cleanup

Change-Id: I213bc7438e06bd1002775066bfb0dc6f10e8a84a

* Review changes and model print issue (circular dependency fix)

Change-Id: I5c41d2f9a937416ce260f55cb045c86858dd201a

* removed debug code from patching_common.py

* Round 2 + CI import issue

Change-Id: I27dbb33de8e027fb0b726336b38156b5d23a6896
Signed-off-by: Xin He <xinhe3@habana.ai>
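An illustrative sketch, not the repository's nn.Module-based implementation: dynamic quantization derives the scale from the live activation at call time instead of from pre-measured statistics (assumes a PyTorch build with float8 dtypes).

import torch

def dynamic_fp8_qdq(x):
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = x.abs().max() / fp8_max          # scale computed from this input, at runtime
    q = (x / scale).to(torch.float8_e4m3fn)  # quantize
    return q.to(torch.float32) * scale       # dequantize

x = torch.randn(4, 8)
print((x - dynamic_fp8_qdq(x)).abs().max())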

* [SW-217334] enable fp8 qdq mode using PatchedModuleBase (#129)

* [SW-217334] enable fp8 qdq mode using PatchedModuleBase

* fix review comments

* [SW-218871] fp8 multi-cards is not loaded correctly (#138)

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>

* Fix bug in mixtral unitscale (#141)

* [SW-218197] fix bug in Mixtral unitscale

* [SW-218197] fix bug in Mixtral unitscale

* update version to 3.3 for release

Signed-off-by: Xin He <xinhe3@habana.ai>

* [SW-20808] Make sure save&load format is an Enum object (#58)

* [SW-20808] Make sure save&load format is an Enum object

Signed-off-by: Xin He <xinhe3@habana.ai>

* Update save_load_entry.py

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Co-authored-by: Xin He <xinhe3@habana.ai>
Signed-off-by: Xin He <xinhe3@habana.ai>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add xfail for torchvision

Signed-off-by: Xin He <xinhe3@habana.ai>

* fix ILITV-3859

Signed-off-by: xin3he <xin3.he@intel.com>

* workaround for ILITV-3858

Signed-off-by: xin3he <xin3.he@intel.com>

* fix sdxl_smooth_quant

Signed-off-by: xin3he <xin3.he@intel.com>

* fix ILITV-3854

Signed-off-by: xin3he <xin3.he@intel.com>

---------

Signed-off-by: Xin He <xinhe3@habana.ai>
Signed-off-by: changwang <changwang@habana.ai>
Signed-off-by: changwangss <changwang@habana.ai>
Signed-off-by: Uri Livne <ulivne@habana.ai>
Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
Signed-off-by: Yi Liu <yiliu4@habana.ai>
Signed-off-by: Xin <xin3.he@intel.com>
Signed-off-by: xin3he <xin3.he@intel.com>
Co-authored-by: Xin He <xinhe3@habana.ai>
Co-authored-by: RafLit <rafal.litka@intel.com>
Co-authored-by: Rafal Litka <rlitka@habana.ai>
Co-authored-by: Dany Kiazada <141814181+kiazada@users.noreply.github.com>
Co-authored-by: Nir David <124874956+nirda7@users.noreply.github.com>
Co-authored-by: Yuwen Zhou <yuwen.zhou@intel.com>
Co-authored-by: Wang, Chang <changwang@habana.ai>
Co-authored-by: Uri Livne <ulivne@habana.ai>
Co-authored-by: Oz Abramovich <oabramovich@habana.ai>
Co-authored-by: Dudi Lester <160421192+dudilester@users.noreply.github.com>
Co-authored-by: Danny Semiat <dsemiat@habana.ai>
Co-authored-by: smarkovichgolan <smarkovich@habana.ai>
Co-authored-by: Yi Liu <yi4.liu@intel.com>
Co-authored-by: Yi Liu <yiliu4@habana.ai>
Co-authored-by: Linoy Buchnik <linoybu@gmail.com>
Co-authored-by: Nadav Elyahu <88962733+nelyahu@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
Co-authored-by: Sun, Xuehao <xuehao.sun@intel.com>