
"You must select at least one handle point and target point." Should I read the paper first? #7

Closed
kanseaveg opened this issue May 21, 2023 · 14 comments


@kanseaveg

File "gradio_app.py", line 83, in on_drag
raise gr.Error('You must select at least one handle point and target point.').

I dragged my photo and it reported this error.

Should I read the paper first?

@kanseaveg (Author)

[screenshot]

Still getting the error.

@Zeqiang-Lai (Collaborator)

It is not necessary. You have to click at least two points. Blue: handle point; red: target point, like this.

[screenshot]

Then click "Drag it"; the model will drag the blue point towards the red point.

BTW: do you have any suggestions for a better error message? Does "handle point" confuse you?
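For context, a minimal sketch of the kind of check that raises this error (hypothetical shape; the actual `on_drag` in `gradio_app.py` may differ):

```python
import gradio as gr

# Hypothetical sketch: `handle_points` (blue) and `target_points` (red)
# would be the lists of clicked points collected from the image widget.
def validate_points(handle_points, target_points):
    if len(handle_points) == 0 or len(target_points) == 0:
        raise gr.Error('You must select at least one handle point and target point.')
```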

@kanseaveg (Author)

File "/home/amax/euan/code/draggan/drag_gan.py", line 173, in drag_gan
F0 = F.detach().clone()
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 23.69 GiB total capacity; 4.47 GiB already allocated; 150.94 MiB free; 4.99 GiB reserved in total by PyTorch)

I think maybe it's out of memory, lol.

How can I deploy it on my server? I have four 3090 GPUs.

@Zeqiang-Lai (Collaborator)

It can be deployed with about 9 GB GPU memory.
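To check which of your GPUs has that much free memory before launching, a quick sketch (assumes a recent PyTorch with `torch.cuda.mem_get_info`):

```python
import torch

# Print free/total memory per visible GPU, so you can pick one
# with at least ~9 GB free for the demo.
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```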

@kanseaveg (Author)

[screenshot]

Still got the error, lol. I think it's out of memory now:

```
Traceback (most recent call last):
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/routes.py", line 421, in run_predict
    event_data=event_data,
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/blocks.py", line 1321, in process_api
    fn_index, inputs, iterator, request, event_id, event_data
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/blocks.py", line 1064, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/utils.py", line 508, in __anext__
    run_sync_iterator_async, self.iterator, limiter=self.limiter
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/anyio/to_thread.py", line 32, in run_sync
    func, *args, cancellable=cancellable, limiter=limiter
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
  File "gradio_app.py", line 103, in on_drag
    max_iters=max_iters):
  File "/home/amax/euan/code/draggan/drag_gan.py", line 182, in drag_gan
    sample2, F2 = g_ema.generate(latent, noise)
  File "/home/amax/euan/code/draggan/drag_gan.py", line 107, in generate
    out = conv1(out, latent[:, i], noise=noise1)
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/amax/euan/code/draggan/stylegan2/model.py", line 358, in forward
    out = self.conv(input, style)
  File "/home/amax/miniconda3/envs/cyy/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/amax/euan/code/draggan/stylegan2/model.py", line 275, in forward
    input, weight, padding=0, stride=2, groups=batch
  File "/home/amax/euan/code/draggan/stylegan2/op/conv2d_gradfix.py", line 64, in conv_transpose2d
    ).apply(input, weight, bias)
  File "/home/amax/euan/code/draggan/stylegan2/op/conv2d_gradfix.py", line 146, in forward
    **common_kwargs,
RuntimeError: CUDA out of memory. Tried to allocate 130.00 MiB (GPU 0; 23.69 GiB total capacity; 4.85 GiB already allocated; 120.94 MiB free; 5.02 GiB reserved in total by PyTorch)
```

@Zeqiang-Lai (Collaborator)

Could you show the output of `nvidia-smi`?

I think your server might not have enough free GPU memory.

@kanseaveg (Author)

```python
if __name__ == '__main__':
    demo = main()
    demo.queue(concurrency_count=1, max_size=20).launch(share=True, server_name='10.xx.xx.239', server_port=6666)
```

I just modified the last line to deploy it on my own server, which has four 3090 GPUs.

But it ran into trouble.

```
(cyy) amax@admin:~/euan/code/draggan$ nvidia-smi
Sun May 21 20:33:53 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...   On  | 00000000:18:00.0 Off |                  N/A |
| 39%   28C    P8    19W / 350W | 17580MiB / 24576MiB  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...   On  | 00000000:3B:00.0 Off |                  N/A |
| 39%   29C    P8    26W / 350W |     2MiB / 24576MiB  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...   On  | 00000000:86:00.0 Off |                  N/A |
| 42%   27C    P8    23W / 350W |     2MiB / 24576MiB  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...   On  | 00000000:AF:00.0 Off |                  N/A |
| 30%   28C    P8    15W / 350W |     2MiB / 24576MiB  |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      4410      C   python                           8758MiB |
|    0   N/A  N/A     18275      C   python                           8820MiB |
+-----------------------------------------------------------------------------+
```

@kanseaveg (Author)

I would like to ask: is this Gradio service running on a remote server, or on local PyTorch?

@Zeqiang-Lai (Collaborator)

OK, I see.

Try this to use GPU 1:

```bash
export CUDA_VISIBLE_DEVICES=1
python gradio_app.py
```
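An equivalent sketch in Python, if you prefer to pin the GPU inside the script rather than the shell (the variable must be set before PyTorch initializes CUDA):

```python
import os

# Must run before the first `import torch` (or any CUDA initialization);
# afterwards logical device 0 maps to physical GPU 1.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.get_device_name(0))
```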

@kanseaveg (Author), May 21, 2023

> I would like to ask: is this Gradio service running on a remote server, or on local PyTorch?

Or to put it another way: the Gradio service runs on the remote interface provided by Gradio, but the graphics card used is local?

@Zeqiang-Lai (Collaborator), May 21, 2023

Well, I guess you want a shareable link? If so, once you have launched the service via `python gradio_app.py`, you will get output similar to:

```
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://bf5e8576f09a6582f7.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
```

The public URL https://bf5e8576f09a6582f7.gradio.live can be accessed from anywhere and uses your local GPU.
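For reference, a minimal standalone sketch (hypothetical `echo` demo, not from this repo) showing that `share=True` is all that is needed to get such a public link while inference still runs locally:

```python
import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
# share=True prints a local URL and a public *.gradio.live URL;
# requests to the public URL are tunneled back to this machine,
# so the model still runs on the local GPU.
demo.launch(share=True)
```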

@kanseaveg (Author)

[screenshot]

It works now!!!

Thank you, my dear friends!

@Zeqiang-Lai (Collaborator)

Cool, you are welcome.

@kanseaveg (Author)

[screenshot]

It smiles now, LOL. Thank you, my friend. I forgot to switch my GPU.
