
feat: adopt hubble sdk on notebook login #576

Merged: 25 commits, Oct 17, 2022
Commits
944e60c
feat: adopt hubble sdk on notebook login
bwanglzu Oct 14, 2022
6976db9
docs: add documentation
bwanglzu Oct 14, 2022
7736585
chore: add changelog
bwanglzu Oct 14, 2022
4f83d3d
docs: update docstring with notebook login
bwanglzu Oct 14, 2022
d83690f
docs: update docstring
bwanglzu Oct 14, 2022
f1b37de
docs: update docstring
bwanglzu Oct 14, 2022
4dabbd7
docs: update docstring
bwanglzu Oct 14, 2022
878d851
docs: update docstring
bwanglzu Oct 14, 2022
852cd38
docs: update docstring
bwanglzu Oct 14, 2022
91247fb
docs: update docstring
bwanglzu Oct 14, 2022
efd43cc
docs: update docstring
bwanglzu Oct 14, 2022
20ebad0
docs: update login success message
bwanglzu Oct 14, 2022
56fdaf4
Merge branch 'feat-notebook-login' of https://github.com/jina-ai/fine…
bwanglzu Oct 14, 2022
d59dfce
docs: upper case google and jupyter
bwanglzu Oct 14, 2022
517079a
docs: use upper case colab
bwanglzu Oct 14, 2022
a009e62
docs: fix tll training data prepare section
bwanglzu Oct 14, 2022
0a1d3c3
chore: add changelog
bwanglzu Oct 14, 2022
3901a1b
chore: freeze hubble sdk version
bwanglzu Oct 14, 2022
2e10245
fix: add force option to login
bwanglzu Oct 14, 2022
83c1e93
feat: init state at post init
bwanglzu Oct 16, 2022
d85e379
feat: remove unused code
bwanglzu Oct 16, 2022
1df4cc0
docs: fix dataset name in clip example
bwanglzu Oct 17, 2022
f9f1904
feat: add force to login
bwanglzu Oct 17, 2022
d4126e7
docs: fix clip training data name
bwanglzu Oct 17, 2022
6317f8c
test: add force to mocker
bwanglzu Oct 17, 2022
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -31,6 +31,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Change CLIP fine-tuning example in the documentation ([#569](https://github.com/jina-ai/finetuner/pull/569))

- Use latest Hubble with `notebook_login` support ([#576](https://github.com/jina-ai/finetuner/pull/576))

### Fixed

- Bump flake8 to `5.0.4`. ([#568](https://github.com/jina-ai/finetuner/pull/568))
4 changes: 2 additions & 2 deletions README.md
@@ -129,7 +129,7 @@ The following code snippet describes how to fine-tune ResNet50 on [Totally Looks
import finetuner
from finetuner.callback import EvaluationCallback

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook/Google Colab

run = finetuner.fit(
model='resnet50',
@@ -149,7 +149,7 @@ Fine-tuning might take 5 minutes to finish. You can later re-connect your run wit
```python
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

run = finetuner.get_run('resnet50-tll-run')

2 changes: 1 addition & 1 deletion docs/_templates/template_ft_in_action.md
@@ -56,7 +56,7 @@ print(run.status())
"Since some runs might take up to several hours/days, you can reconnect to your run very easily to monitor its status and logs."
```python
import finetuner
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
run = finetuner.get_run('my_run')
```

2 changes: 1 addition & 1 deletion docs/get-started/how-it-works.md
@@ -36,7 +36,7 @@ From an engineering perspective,
we have hidden all the complexity of machine learning algorithms and resource configuration (such as GPUs).
All you need to do is decide on your backbone model and prepare your training data.

Once you logged into the Jina Ecosystem with {meth}`~finetuner.login()`,
Once you have logged in to the Jina Ecosystem with {meth}`~finetuner.login()` or {meth}`~finetuner.notebook_login()`,
Finetuner will push your training data into our *Cloud Artifact Storage* (only visible to you).
At the same time, we will spin up an isolated computational resource
with proper memory, CPU, and GPU dedicated to your fine-tuning job.
6 changes: 3 additions & 3 deletions docs/tasks/image-to-image.md
@@ -40,10 +40,10 @@ For this example, we'll go with `resnet50`.
## Fine-tuning
From now on, all the action happens in the cloud!

First you need to {ref}`login to Jina ecosystem <login-to-jina-ecosystem>`:
First, {ref}`log in to the Jina ecosystem <login-to-jina-ecosystem>`:
```python
import finetuner
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
```

Now, you can easily start a fine-tuning job with {meth}`~finetuner.fit`:
@@ -95,7 +95,7 @@ Since some runs might take up to several hours, it's important to know how to re

```python
import finetuner
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

run = finetuner.get_run('resnet-tll')
```
4 changes: 2 additions & 2 deletions docs/tasks/text-to-image.md
@@ -29,7 +29,7 @@ From now on, all the action happens in the cloud!
First, you need to {ref}`log in to the Jina ecosystem <login-to-jina-ecosystem>`:
```python
import finetuner
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
```

Now that everything's ready, let's create a fine-tuning run!
@@ -73,7 +73,7 @@ Since some runs might take up to several hours/days, it's important to know how

```python
import finetuner
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
run = finetuner.get_run('clip-fashion')
```

6 changes: 3 additions & 3 deletions docs/tasks/text-to-text.md
@@ -103,7 +103,7 @@ import finetuner
from finetuner.callback import EvaluationCallback

# Make sure to login to Jina Cloud
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

# Start fine-tuning as a run within an experiment
run = finetuner.fit(
@@ -156,7 +156,7 @@ Since some runs might take up to several hours, you can reconnect to your run ve
```python
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
run = finetuner.get_run('finetune-quora-dataset-bert-base-cased')
print(f'Run status: {run.status()}')
```
@@ -168,7 +168,7 @@ Our `EvaluationCallback` during fine-tuning ensures that after each epoch, an ev
```python
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
run = finetuner.get_run('finetune-quora-dataset-bert-base-cased')
print(f'Run logs: {run.logs()}')
```
4 changes: 2 additions & 2 deletions docs/walkthrough/index.md
@@ -22,7 +22,7 @@ import finetuner
from docarray import DocumentArray

# Login to Jina ecosystem
finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

# Prepare training data
train_data = DocumentArray(...)
@@ -43,7 +43,7 @@ run.save_artifact(directory='experiment')
You should see this in your terminal:

```bash
🔐 Successfully login to Jina ecosystem!
🔐 Successfully logged in to Jina AI as [USER NAME]!
Run name: vigilant-tereshkova
Run logs:

10 changes: 5 additions & 5 deletions docs/walkthrough/integrate-with-jina.md
@@ -13,7 +13,7 @@ To embed a [DocumentArray](https://docarray.jina.ai/) with a fine-tuned model, y
from docarray import DocumentArray, Document
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

token = finetuner.get_token()
run = finetuner.get_run(
@@ -76,7 +76,7 @@ You have three options:
import finetuner
from jina import Flow

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
bwanglzu marked this conversation as resolved.

token = finetuner.get_token()
run = finetuner.get_run(
@@ -172,7 +172,7 @@ To use those models, you have to provide the name of the model via an additional
from docarray import DocumentArray, Document
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

token = finetuner.get_token()
run = finetuner.get_run(
@@ -192,7 +192,7 @@ finetuner.encode(model=model, data=da)
from docarray import DocumentArray, Document
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

token = finetuner.get_token()
run = finetuner.get_run(
@@ -215,7 +215,7 @@ If you want to host the CLIP models, you also have to provide the name of the mo
import finetuner
from jina import Flow

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

token = finetuner.get_token()
run = finetuner.get_run(
8 changes: 4 additions & 4 deletions docs/walkthrough/login.md
@@ -2,20 +2,20 @@
# Login

Since Finetuner leverages cloud resources for fine-tuning,
you are required to {meth}`~finetuner.login()` and obtain a token from Jina before starting a fine-tuning job.
you are required to {meth}`~finetuner.login()` (or {meth}`~finetuner.notebook_login()`) and obtain a token from Jina before starting a fine-tuning job.
It is as simple as:

```python
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab
```

A browser window should pop up with different login options.
After {meth}`~finetuner.login()` you will see this in your terminal:
After {meth}`~finetuner.login()` or {meth}`~finetuner.notebook_login()`, you will see this in your terminal:

```bash
🔐 Successfully login to Jina ecosystem!
🔐 Successfully logged in to Jina AI as [USER NAME]!
```

```{admonition} Why do I need to login?
4 changes: 2 additions & 2 deletions docs/walkthrough/save-model.md
@@ -21,7 +21,7 @@ In the example below, we show how to connect to an existing run and download a t
```python
import finetuner

finetuner.login()
finetuner.login() # use finetuner.notebook_login() in Jupyter notebook or Google Colab

# connect to the experiment we created previously.
experiment = finetuner.get_experiment('finetune-flickr-dataset')
@@ -40,7 +40,7 @@ If the fine-tuning finished,
you can see this in the terminal:

```bash
🔐 Successfully login to Jina Ecosystem!
🔐 Successfully logged in to Jina AI as [USER NAME]!
Run status: FINISHED
Run Artifact id: 62972acb5de25a53fdbfcecc
Run logs:
4 changes: 4 additions & 0 deletions finetuner/__init__.py
@@ -34,6 +34,10 @@ def login():
ft.login()


def notebook_login():
ft.notebook_login()


def connect():
ft.connect()

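The change to `finetuner/__init__.py` above is a thin wrapper: the new module-level `notebook_login()` delegates to a shared `Finetuner` instance (`ft`), just like `login()` does. A minimal, self-contained sketch of that delegation pattern — the Hubble client is stubbed out here, since the real `jina-hubble-sdk` call is not reproduced:

```python
# Sketch of the delegation pattern in finetuner/__init__.py: module-level
# functions forward to one shared Finetuner instance. The Hubble client is
# a stub (an assumption), not the real jina-hubble-sdk API.


class _StubHubble:
    """Stands in for the jina-hubble-sdk client."""

    def __init__(self):
        self.logged_in = False

    def notebook_login(self, force=False):
        # The real SDK renders an interactive login widget in the notebook.
        self.logged_in = True


class Finetuner:
    def __init__(self):
        self._hubble = _StubHubble()
        self._state_initialized = False

    def _init_state(self):
        # In the real library this creates a client and a default experiment.
        self._state_initialized = True

    def notebook_login(self, force=False):
        self._hubble.notebook_login(force=force)
        self._init_state()


ft = Finetuner()  # shared module-level instance


def notebook_login():
    ft.notebook_login()
```

Keeping the module-level function a one-line delegate means all session state lives on the single `ft` instance.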
11 changes: 11 additions & 0 deletions finetuner/finetuner.py
@@ -27,6 +27,17 @@ def login(self):
hubble.login()
self._init_state()

def notebook_login(self, force: bool = False):
"""Log in to Hubble account, initialize a client object
and create a default experiment.

:param force: If set to ``True``, overwrite the token and re-login.

[Review comment, Member] Can you add the `force` parameter to the `login` function as well? It exists there too.

Note: This works in Jupyter notebooks and Google Colab.
"""
hubble.notebook_login(force=force)
self._init_state()

def connect(self):
"""Connects finetuner to Hubble without logging in again.
Use this function, if you are already logged in.
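The `force` flag in the new `notebook_login` controls whether a cached token is reused or discarded and re-acquired. A hedged sketch of those semantics — the in-memory token store and token strings below are invented stand-ins, not the SDK's real on-disk persistence:

```python
# Sketch of force=True vs force=False login semantics (assumed behavior):
# reuse a cached token unless the caller explicitly forces a fresh login.
import itertools

_counter = itertools.count(1)
_token_store = {}


def notebook_login(force: bool = False) -> str:
    """Return an auth token, reusing the cached one unless `force` is set."""
    if not force and "token" in _token_store:
        return _token_store["token"]  # cached: skip the interactive flow
    # The real SDK would open an interactive login widget here.
    _token_store["token"] = f"token-{next(_counter)}"
    return _token_store["token"]
```

With this shape, a second plain call is a no-op, while `notebook_login(force=True)` overwrites the stored token, which matches the docstring's "overwrite token and re-login".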
2 changes: 1 addition & 1 deletion setup.py
@@ -64,7 +64,7 @@
install_requires=[
'docarray[common]>=0.13.31',
'finetuner-stubs==0.10.4',
'jina-hubble-sdk>=0.19.0',
'jina-hubble-sdk>=0.20.2',
],
extras_require={
'full': [
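The `setup.py` change raises the dependency floor to `jina-hubble-sdk>=0.20.2`, the release assumed here to be the first shipping `notebook_login`. A small illustrative sketch of how such a version floor behaves — the helper names are hypothetical, not part of Finetuner:

```python
# Hypothetical helper showing why a >=0.20.2 floor gates the new API.
# Naive X.Y.Z parsing; real packaging code would use packaging.version.


def _parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


MIN_HUBBLE = "0.20.2"  # floor set in setup.py


def supports_notebook_login(installed: str) -> bool:
    """True if the installed SDK version satisfies the declared floor."""
    return _parse(installed) >= _parse(MIN_HUBBLE)
```

Under this check, the previous floor of `0.19.0` would not qualify, which is why the pin had to move.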