
Speeding up the process in lots of times #17

Open
Spaceginner opened this issue Dec 5, 2022 · 7 comments
Labels
enhancement New feature or request feature-request help wanted Extra attention is needed

Comments

@Spaceginner

Issue type: feature request

Severity: medium

Describe the feature

So, it seems that you are using the PyTorch library for this project (based on a single file I viewed; sorry if I just missed something), which is fine. But I have a question: why doesn't it use CUDA? A CPU is slow for this kind of computation (lots of identical, simple operations), while a GPU is designed for exactly such use cases.

Switching the device to CUDA can give a big (or at least significant) performance uplift. I have done some work with PyTorch (my projects were simple, though), and transferring a model to another device is very easy: one method call and one check:

import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.load("model name").to(DEVICE)

(Google it or search the PyTorch docs; I am not sure about the exact method name for moving a model to another device.)
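For reference, a slightly more complete version of the snippet above; note that input tensors must be moved to the same device as the model, which the one-liner does not show. This is a minimal sketch, and the model here is a stand-in, not this project's actual model:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model; in practice this would be the model loaded from disk.
model = torch.nn.Linear(4, 2).to(DEVICE)

# Inputs must live on the same device as the model's parameters.
x = torch.rand(1, 4, device=DEVICE)
y = model(x)
print(y.shape)  # torch.Size([1, 2])
```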

btw, it would be better if the libraries used (such as PyTorch) were installed on the user's PC in a venv created at first launch (you can create venvs from scripts, I guess). That would allow a quick pop-up on start asking "Do you have an NVidia GPU?": if yes, it installs PyTorch with CUDA support, otherwise PyTorch without CUDA. (Yes, I am aware PyTorch is about 2 GB, at least with CUDA support.)
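The install-time branch described above could be scripted along these lines. This is a hypothetical sketch: the nvidia-smi check, the wheel index URLs, and the CUDA version tag are my assumptions (the tag should match whatever build PyTorch currently publishes), and the final echo stands in for actually running the venv/pip commands:

```shell
#!/bin/sh
# Crude GPU check: an installed NVIDIA driver usually ships nvidia-smi.
if command -v nvidia-smi >/dev/null 2>&1; then
    TORCH_INDEX="https://download.pytorch.org/whl/cu118"  # CUDA build (large)
else
    TORCH_INDEX="https://download.pytorch.org/whl/cpu"    # CPU-only build (smaller)
fi

# In a real bootstrap script, these would be executed rather than echoed.
echo "python -m venv .venv && .venv/bin/pip install torch --index-url $TORCH_INDEX"
```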

PC:

  • OS: Windows
  • Browser: Chrome
  • .exe file name: Spaceship.Generator.v1.0.3.exe

Additional context

How did I come to this concern? My CUDA device is not used here, but when I use Stable Diffusion or my own projects, they do use it.

...Also, it would be useful to provide a quick guide on finding out which GPU you have, because sometimes people don't know which GPU they are using.

@Spaceginner
Author

Also, the venv folder could be created in %temp% or some cache folder.

@gallorob gallorob added enhancement New feature or request feature-request help wanted Extra attention is needed labels Dec 6, 2022
@gallorob
Collaborator

gallorob commented Dec 6, 2022

Hi @Spaceginner, the PyTorch library is supported but not used by default, as otherwise the executable would be a ~800MB file and the application would take even longer to start--this was implemented in arayabrain#35.

The PyTorch dependency is kept for now for reproducibility of results, as it was used in tests in earlier versions of the application (before the user study release).

There is already some parallelisation going on (using joblib.Parallel and joblib.delayed), but not all operations can be parallelised (or, at least, quite a bit of code refactoring would be needed). I will leave this feature request open and add the 'help wanted' tag, as I don't think I'll manage to tackle it for now.
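For anyone unfamiliar with it, the joblib pattern mentioned here looks roughly like this (the worker function is a made-up stand-in, not code from this repository):

```python
from joblib import Parallel, delayed

def costly(x):
    # Stand-in for an expensive, independent computation.
    return x * x

# Fan the independent calls out across all available cores (n_jobs=-1).
results = Parallel(n_jobs=-1)(delayed(costly)(i) for i in range(8))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

This only helps when the calls are independent of each other, which is why not every operation in the project can be parallelised this way.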

@gallorob gallorob changed the title [FEATURE REQUEST] Speeding up the process in lots of times Speeding up the process in lots of times Dec 6, 2022
@Spaceginner
Author

Spaceginner commented Dec 6, 2022

So, if I understood correctly, CUDA (or any GPU compute) is not being used, right? Just lots of threading? Also, can I compile the program with a CUDA-enabled torch myself? (My CPU is just too slow for this kind of computation.)

@gallorob
Collaborator

gallorob commented Dec 6, 2022

Correct, no GPU/CUDA parallelisation, just multithreading.

As for your question: yes, you can. If you want to play around with this program, you can edit the configs.ini file and set the use_torch flag to True. PyInstaller will automatically pick up the libraries used in the project when you build, so that should not be a problem.
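If you want to check the flag programmatically, reading it from configs.ini with the standard library looks something like this. The section name below is a guess; check the actual file for the real layout:

```python
import configparser

# Hypothetical configs.ini contents; the real section/key layout may differ.
cfg = configparser.ConfigParser()
cfg.read_string("""
[settings]
use_torch = True
""")

# getboolean handles True/False, yes/no, on/off, 1/0 spellings.
use_torch = cfg.getboolean("settings", "use_torch")
print(use_torch)  # True
```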

PS: consider forking this repository if you want to play around with it, you can still create a PR later on if you're happy with the changes you make 😃

@Spaceginner
Author

One more question: will it pick up the torch libraries with CUDA support, or the regular ones?

@gallorob
Collaborator

gallorob commented Dec 6, 2022

If it's set up to use CUDA, it will use CUDA. If you're unsure, try making a simple script such as

import torch

print(torch.cuda.is_available())

and creating an executable from it with

import PyInstaller.__main__ as pyinst

pyi_args = ['my_torch_script.py',
            '--clean',
            '--onefile',
            '--noconfirm',
            '--name', "my_torch_executable"]

pyinst.run(pyi_args)

This will give you a very barebones test to see if you can make a Python executable with PyTorch and CUDA. You can then edit the first script to test tensor operations on a GPU device.
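Extending that first script to exercise tensor operations on the selected device could look like the sketch below (the matrix sizes are arbitrary; on a machine without CUDA it simply runs on the CPU):

```python
import torch

# Select the GPU if CUDA is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Allocate two matrices directly on the chosen device and multiply them.
a = torch.rand(256, 256, device=device)
b = torch.rand(256, 256, device=device)
c = a @ b

print(c.shape, c.device)
```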

@Spaceginner
Author

OK, thx
