Huge installation using flatpak #82

Closed
lamyergeier opened this issue Dec 29, 2023 · 14 comments
@lamyergeier commented Dec 29, 2023

Installed using Flatpak. It downloaded about 3 GB of data! Will each version update download 3 GB every time? Also, is so much data really necessary? For example, I don't have any GPU except the built-in Intel GPU, so I'm not sure why the software download is so huge. This does not include the language models, which need to be downloaded separately.

@mkiol (Owner) commented Dec 29, 2023

Thanks for reporting.

Indeed, the size of the package is ridiculously huge.

To address this problem, the next Flatpak package (v4.4.0) will be split into 3 parts:

  1. base package with all features but without GPU libraries (pkg size 600 MB, unpacked size 2.9 GB)
  2. add-on package providing only NVIDIA CUDA libraries (pkg size 600 MB, unpacked size 6.8 GB)
  3. add-on package with AMD ROCm libraries (pkg size 650 MB, unpacked size 11 GB)

So the plan is to significantly reduce the size of the base package.

If it is still too big, you might try the "tiny" package. You can download it from the Releases page and install it manually. A comparison of the "regular" and "tiny" packages is here.
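
As a rough sketch, assuming the Releases page provides a single-file .flatpak bundle, a manually downloaded package can be installed from the command line like this (the file name below is illustrative, not an actual release asset):

    # install a locally downloaded Flatpak bundle for the current user
    flatpak install --user ./dsnote_tiny.flatpak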

@lamyergeier (Author)

base package with all features but without GPU libraries (pkg size 600 MB, unpacked size 2.9 GB)

Maybe the base package should be the tiny one, as 600 MB is still a lot.

To address this problem, next Flatpak package (v4.4.0) will be split into 3 parts

Please update (in the next release) the table in the README with those 3 parts and the tiny package.

@mkiol (Owner) commented Dec 30, 2023

Maybe the base package should be the tiny one, as 600 MB is still a lot.

This is something worth considering. Initially, I did not want to create too many add-ons so as not to confuse the user.

Please update (in the next release) the table in the README with those 3 parts and the tiny package.

Sure, I will do that.

@lamyergeier (Author) commented Dec 31, 2023 via email

@mkiol (Owner) commented Jan 26, 2024

"Modular" version has been released 🎉

It looks like this (section from README):

Starting from v4.4.0, the app distributed via Flatpak (published on Flathub) consists of the following packages:

  • Base package "Speech Note" (net.mkiol.SpeechNote)
  • Add-on for AMD graphics card "Speech Note AMD" (net.mkiol.SpeechNote.Addon.amd)
  • Add-on for NVIDIA graphics card "Speech Note NVIDIA" (net.mkiol.SpeechNote.Addon.nvidia)

Comparison between Base, Tiny and Add-ons Flatpak packages:

Sizes           Base       Tiny       AMD add-on    NVIDIA add-on
Download size   0.9 GiB    70 MiB     +2.1 GiB      +3.8 GiB
Unpacked size   2.9 GiB    170 MiB    +11.5 GiB     +6.9 GiB

Features                               Base   Tiny   AMD add-on   NVIDIA add-on
Coqui/DeepSpeech STT                   +      +
Vosk STT                               +      +
Whisper (whisper.cpp) STT              +      +
Whisper (whisper.cpp) STT AMD GPU      -      -      +
Whisper (whisper.cpp) STT NVIDIA GPU   -      -                   +
Faster Whisper STT                     +      -
Faster Whisper STT NVIDIA GPU          -      -                   +
April-ASR STT                          +      +
eSpeak TTS                             +      +
MBROLA TTS                             +      +
Piper TTS                              +      +
RHVoice TTS                            +      +
Coqui TTS                              +      -
Coqui TTS AMD GPU                      -      -      +
Coqui TTS NVIDIA GPU                   -      -                   +
Mimic3 TTS                             +      -
Punctuation restoration                +      -
Translator                             +      +
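
As a rough sketch, installing the base package and, optionally, one of the add-ons from Flathub would look like this (assuming Flathub is already configured as a remote):

    # base package
    flatpak install flathub net.mkiol.SpeechNote
    # optionally, for a supported NVIDIA GPU
    flatpak install flathub net.mkiol.SpeechNote.Addon.nvidia
    # or, for a supported AMD GPU
    flatpak install flathub net.mkiol.SpeechNote.Addon.amd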

mkiol closed this as completed Jan 26, 2024
@lamyergeier (Author) commented Jan 26, 2024

  1. Does it make use of Intel integrated graphics if neither an NVIDIA nor an AMD graphics card is present in the laptop?
  2. Does every update of the base package download the entire Flatpak package, or does it support delta updates?

@mkiol (Owner) commented Jan 26, 2024

Does it make use of Intel integrated graphics if neither an NVIDIA nor an AMD graphics card is present in the laptop?

Unfortunately, no. Right now GPU acceleration is available only for selected models and only for NVIDIA and AMD.

In theory, Intel GPU is supported in whisper.cpp ("Whisper" models in Speech Note), but I haven't figured out how to make it work inside the Flatpak package. Other STT models (e.g. "Faster Whisper") and all TTS models don't support Intel GPU acceleration at all.

Does every update of the base package download the entire Flatpak package

Add-ons are separate packages, so any update to the "Base" will not require an update to the Add-ons.

@lamyergeier (Author)

Add-ons are separate packages, so any update to the "Base" will not require an update to the Add-ons.

I meant that the base package is around 900 MB. Would the next update of the base package download the entire base package again, or just the delta of the change?

@mkiol (Owner) commented Jan 26, 2024

would download the entire base package again

I think the entire package has to be downloaded when updating. I might be mistaken, but as far as I know Flatpak/Flathub does not support "delta" updates or anything similar.

@lamyergeier (Author)

May I suggest offering the Tiny package as a Flatpak and letting the user install the additional TTS and STT features from separate Flatpak package(s)? That way, when an update happens to one Flatpak package, the user only downloads that part.

@mkiol (Owner) commented Jan 26, 2024

May I suggest offering the Tiny package as a Flatpak and letting the user install the additional TTS and STT features from separate Flatpak package(s)?

Good suggestion.

The main problem is the Python dependencies. One small pip package pulls in a lot of other stuff. Moreover, there is no simple way to split them because in the "Python world" everything depends on everything. My pragmatic approach was "Tiny": "Tiny" is de facto "Base" without everything that depends on Python. This was the easiest way to downsize.

I'm thinking about moving all Python dependencies to a new add-on, so "Tiny" would become "Base". This might work 🤔 On the other hand, I don't want to create too many add-ons so as not to confuse users too much...

@lamyergeier (Author)

Maybe it's possible to install the Python-specific modules using pipx and use them with Tiny?

@mkiol (Owner) commented Jan 27, 2024

and use them with Tiny?

This might work, but... To use anything outside the Flatpak sandbox, you (as an advanced user) need to override permissions and grant access to the specific directory where these manually installed Python libraries are located.

I will think about it :)
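
As a rough sketch of that idea, assuming the libraries are installed with pipx into its default directory and that the app were made to look there (both are assumptions, not current behavior):

    # install a Python package into an isolated environment outside the sandbox
    pipx install faster-whisper
    # grant the sandboxed app read-only access to that directory (path is illustrative)
    flatpak override --user --filesystem=~/.local/share/pipx:ro net.mkiol.SpeechNote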

@lamyergeier (Author) commented Jul 2, 2024

I will think about it :)

@mkiol If you have tried this, it would be useful for beginners if you could provide the installation steps with pipx and Tiny (Flatpak); maybe Flatseal can be used for permission management.
