
feat: clip as external executor #74

Merged · 13 commits · Aug 8, 2022 · Changes from 4 commits
6 changes: 6 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
+# OS
+.DS_Store
+
 # Byte-compiled / optimized / DLL files
 __pycache__/
 *.py[cod]
@@ -127,3 +130,6 @@ dmypy.json
 
 # Pyre type checker
 .pyre/
+
+# Jina
+.jina/
2 changes: 1 addition & 1 deletion Dockerfile
@@ -23,7 +23,7 @@ RUN apt-get update \
 RUN if [ -n "${APT_PACKAGES}" ]; then apt-get update && apt-get install --no-install-recommends -y ${APT_PACKAGES}; fi && \
 git clone --depth=1 https://github.com/JingyunLiang/SwinIR.git && \
 git clone --depth=1 https://github.com/CompVis/latent-diffusion.git && \
-git clone --depth=1 https://github.com/hanxiao/glid-3-xl.git && \
+git clone --depth=1 -b pref-remove-clip-as-service https://github.com/jina-ai/glid-3-xl.git && \
 pip install jax[cuda11_cudnn82]==0.3.13 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html && \
 pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113 && \
 cd latent-diffusion && pip install --timeout=1000 -e . && cd - && \
3 changes: 2 additions & 1 deletion README.md
@@ -99,6 +99,7 @@ The 16 candidates are sorted by [CLIP-as-service](https://github.com/jina-ai/cli
 ```python
 fav_id = 3
 fav = da[fav_id]
+fav.embedding = da.embedding
 fav.display()
 ```
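The added line copies the prompt's CLIP embedding onto the selected candidate before it is sent back for diffusion. A minimal stdlib sketch of that hand-off, using a hypothetical `Doc` dataclass in place of docarray's `Document` (the names and fields below are illustrative, not the real API):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Doc:
    """Hypothetical stand-in for docarray's Document."""
    uri: str = ''
    embedding: Optional[List[float]] = None
    matches: List['Doc'] = field(default_factory=list)


# the server response carries one prompt embedding plus 16 image candidates
response = Doc(uri='prompt', embedding=[0.1, 0.2, 0.3])
response.matches = [Doc(uri=f'img{i}.png') for i in range(16)]

fav_id = 3
fav = response.matches[fav_id]       # pick a favourite candidate
fav.embedding = response.embedding   # carry the prompt embedding over
print(fav.uri)
```

Without this copy, the downstream diffusion step would receive a candidate with no embedding, since the encoding now happens once in the external CLIP executor rather than inside each stage.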

@@ -243,7 +244,7 @@ mkdir dalle && cd dalle
 git clone https://github.com/jina-ai/dalle-flow.git
 git clone https://github.com/JingyunLiang/SwinIR.git
 git clone https://github.com/CompVis/latent-diffusion.git
-git clone https://github.com/hanxiao/glid-3-xl.git
+git clone https://github.com/jina-ai/glid-3-xl.git
 ```
 
 You should have the following folder structure:
1 change: 1 addition & 0 deletions client.ipynb
@@ -259,6 +259,7 @@
 "fav_id = 3\n",
 "\n",
 "fav = da[fav_id]\n",
+"fav.embedding = da.embedding\n",
 "\n",
 "fav.display()"
 ]
2 changes: 1 addition & 1 deletion executors/glid3/executor.py
@@ -45,7 +45,7 @@ async def run_glid3(self, d: Document, text: str, skip_rate: float, num_images:
         from dalle_flow_glid3.sample import do_run
 
         args = parser.parse_args(kw_str_list)
-        await do_run(args)
+        await do_run(args, d.embedding)
 
         kw.update({
             'generator': 'GLID3-XL',
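The executor now forwards the embedding computed upstream by the external CLIP encoder into `do_run`, instead of letting glid-3-xl encode the prompt itself. A hedged sketch of that signature change follows; the body is invented for illustration and is not the real `dalle_flow_glid3` code:

```python
import asyncio
from types import SimpleNamespace


async def do_run(args, text_embedding):
    """Hypothetical sketch: the second parameter is new in this PR.

    Previously the function would have done something like
    `text_embedding = clip_model.encode_text(args.text)` internally,
    requiring its own CLIP model in GPU memory.
    """
    return {'prompt': args.text, 'conditioning': text_embedding}


# argparse.Namespace stand-in, as produced by parser.parse_args(...)
args = SimpleNamespace(text='an oil painting of a lighthouse')
result = asyncio.run(do_run(args, [0.1, 0.2, 0.3]))
print(result['prompt'])
```

The practical effect is that the diffusion executor no longer needs to load CLIP at all, which is why the jina-ai fork of glid-3-xl (branch `pref-remove-clip-as-service`) is cloned in the Dockerfile.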
13 changes: 0 additions & 13 deletions executors/rerank/executor.py

This file was deleted.

21 changes: 15 additions & 6 deletions flow.yml
@@ -13,6 +13,13 @@ executors:
       CUDA_VISIBLE_DEVICES: 0 # change this if you have multiple GPU
       XLA_PYTHON_CLIENT_ALLOCATOR: platform # https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html
     replicas: 1 # change this if you have larger VRAM
+  - name: clip_encoder
+    uses: jinahub+docker://CLIPTorchEncoder/latest
+    host: 'demo-cas.jina.ai'
+    port: 2096
+    tls: true
+    external: true
+    needs: [gateway]
   - name: diffusion
     uses: GLID3Diffusion
     uses_with:
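With `external: true`, the flow does not start a CLIP container itself; it routes traffic to the encoder already running at demo-cas.jina.ai over TLS, and the `needs:` entries define the execution DAG. Assuming `dalle` also hangs off the gateway (its entry is outside this hunk), the resulting ordering can be checked with the stdlib:

```python
from graphlib import TopologicalSorter

# Dependency graph read off flow.yml's `needs:` entries after this PR.
# `gateway` is the implicit root; the `dalle` edge is an assumption,
# since its executor entry is not shown in this diff.
needs = {
    'clip_encoder': {'gateway'},
    'dalle': {'gateway'},
    'diffusion': {'clip_encoder'},
    'rerank': {'dalle', 'diffusion'},
}

order = list(TopologicalSorter(needs).static_order())
print(order)
```

The point of rewiring `diffusion` from `needs: [gateway]` to `needs: [clip_encoder]` is visible in any valid ordering: the text embedding is computed exactly once, before diffusion runs, instead of inside it.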
@@ -24,13 +31,15 @@
       CUDA_VISIBLE_DEVICES: 0 # change this if you have multiple GPU
       XLA_PYTHON_CLIENT_ALLOCATOR: platform # https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html
     replicas: 1 # change this if you have larger VRAM
-    needs: [gateway]
+    needs: [clip_encoder]
   - name: rerank
-    uses: ReRank
-    uses_with:
-      clip_server: grpcs://demo-cas.jina.ai:2096
-    py_modules:
-      - executors/rerank/executor.py
+    uses: jinahub+docker://CLIPTorchEncoder/latest
+    host: 'demo-cas.jina.ai'
+    port: 2096
+    uses_requests:
+      '/': rank
+    tls: true
+    external: true
     needs: [dalle, diffusion]
   - name: upscaler
     uses: SwinIRUpscaler
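The dedicated ReRank executor (deleted above) is replaced by the same external CLIPTorchEncoder, with `uses_requests` remapping the default `/` endpoint onto its `rank` endpoint. A toy sketch of that remapping; `ClipRanker` and its scoring are stand-ins for illustration, not the real executor:

```python
class ClipRanker:
    """Stand-in for CLIPTorchEncoder; `rank` here is illustrative only."""

    def rank(self, docs):
        # the real executor scores image candidates against the text prompt
        # with CLIP similarity; here we sort by a precomputed stand-in score
        return sorted(docs, key=lambda d: d['score'], reverse=True)


# endpoint remap, mirroring `uses_requests: {'/': rank}` in flow.yml
uses_requests = {'/': 'rank'}


def dispatch(executor, endpoint, docs):
    """Route a request on `endpoint` to the handler the remap names."""
    handler_name = uses_requests.get(endpoint, endpoint.lstrip('/'))
    return getattr(executor, handler_name)(docs)


candidates = [{'uri': 'a.png', 'score': 0.2}, {'uri': 'b.png', 'score': 0.9}]
ranked = dispatch(ClipRanker(), '/', candidates)
print([d['uri'] for d in ranked])
```

Reusing the hub encoder this way removes the custom `executors/rerank/executor.py` and its `clip_client` dependency entirely, which is what the requirements.txt change below reflects.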
1 change: 0 additions & 1 deletion requirements.txt
@@ -1,6 +1,5 @@
 # jina-related
 jina>=3.4.5
-clip_client>=0.4.20
 docarray>=0.13.5
 # dalle-mini
 flax