update badges
Borda committed Jul 17, 2023
1 parent ff9858c commit 088ee37
Showing 1 changed file, README.md, with 10 additions and 10 deletions.
[![PyPI Downloads](https://pepy.tech/badge/lightning-hivemind)](https://pepy.tech/project/lightning-hivemind)
[![Docs](https://github.com/Lightning-AI/lightning-Hivemind/actions/workflows/docs-deploy.yml/badge.svg?event=push)](https://lightning-ai.github.io/lightning-Hivemind/)

[![General checks](https://github.com/Lightning-Universe/lightning-Hivemind/actions/workflows/ci-checks.yml/badge.svg?event=push)](https://github.com/Lightning-Universe/lightning-Hivemind/actions/workflows/ci-checks.yml)
[![CI testing](https://github.com/Lightning-Universe/lightning-Hivemind/actions/workflows/ci-testing.yml/badge.svg?event=push)](https://github.com/Lightning-Universe/lightning-Hivemind/actions/workflows/ci-testing.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/compatibility/_apis/build/status%2Fstrategies%2FLightning-Universe.lightning-Hivemind?branchName=main)](https://dev.azure.com/Lightning-AI/compatibility/_build/latest?definitionId=64&branchName=main)
[![pre-commit status](https://results.pre-commit.ci/badge/github/Lightning-AI/lightning-Hivemind/main.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/lightning-Hivemind/main)

Collaborative Training tries to solve the need for top-tier multi-GPU servers by allowing you to train across unreliable machines,
such as local machines or even preemptible cloud computing across the internet.

Under the hood, we use [Hivemind](https://github.com/learning-at-home/hivemind), which provides decentralized training across the internet.

To use Collaborative Training, you first need to install this extension:

```bash
pip install -U lightning-Hivemind
```

The `HivemindStrategy` accumulates gradients from all collaborating processes until they reach a `target_batch_size`. By default, we use the batch size
of the first batch to determine what each local machine batch contributes towards the `target_batch_size`. Once the `target_batch_size` is reached, an optimizer step
is made on all processes.
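
For a rough sense of how these pieces interact, here is a hypothetical back-of-the-envelope calculation; the numbers are made up for illustration, and in practice peers can join or drop out at any time:

```py
# Hypothetical numbers, for illustration only.
target_batch_size = 8192  # global batch size the collaboration accumulates towards
local_batch_size = 128    # batch size of the first local batch on this machine
num_peers = 4             # machines currently contributing gradients

# Roughly how many local batches each peer processes before a collective optimizer step:
local_batches_per_step = target_batch_size // (local_batch_size * num_peers)  # 16
```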

When using `HivemindStrategy`, note that you cannot use gradient accumulation (`accumulate_grad_batches`). This is because Hivemind manages accumulation internally.

```py
from lightning import Trainer
from lightning_hivemind.strategy import HivemindStrategy

trainer = Trainer(strategy=HivemindStrategy(target_batch_size=8192), accelerator="gpu", devices=1)
```

Followed by:

```bash
python train.py
# Other machines can connect by running the same command:
# INITIAL_PEERS=... python train.py
# or by passing the peers to the strategy:
# HivemindStrategy(initial_peers=...)
```

A helper message is printed once your training begins, showing you how to train on other machines using the same code.
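
As a sketch of the second option above, you can read the peer address printed by the first machine and pass it to the strategy yourself; the environment-variable handling below is only one possible setup, and the `target_batch_size`, accelerator, and device values are placeholders:

```py
import os

from lightning import Trainer
from lightning_hivemind.strategy import HivemindStrategy

# Paste the address printed by the first machine, or export INITIAL_PEERS before launching.
# If no peers are given, this process starts a new collaboration on its own.
peers = os.environ.get("INITIAL_PEERS")
initial_peers = peers.split(",") if peers else None

trainer = Trainer(
    strategy=HivemindStrategy(target_batch_size=8192, initial_peers=initial_peers),
    accelerator="gpu",
    devices=1,
)
```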
