Add linux support #475

Closed
RaulKong898 opened this issue Jul 15, 2023 · 23 comments

@RaulKong898

Issue Type

Question

vc client version number

It doesn't have Linux support

OS

Linux

GPU

Linux

Clear setting

no

Sample model

no

Input chunk num

no

Wait for a while

The GUI successfully launched.

read tutorial

no

Extract files to a new folder.

no

Voice Changer type

Linux

Model type

Linux

Situation

I am writing to request that you add Linux support for your application. I have been a Linux user for many years and am a big fan of your application. However, I am disappointed that there is no Linux support.

I know that there are many Linux users who would use your application if it were available to them. Linux is a popular platform and the number of Linux users is growing every year.

Adding Linux support would be a great move for the project. It would allow you to reach a wider audience and grow your user base.

Thank you for your time.

Sincerely,

Raul Popescu

@w-okada
Owner

w-okada commented Jul 17, 2023

I believe that Linux users can use the Anaconda environment.
https://github.com/w-okada/voice-changer/blob/master/README_dev_en.md
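For reference, a rough sketch of that kind of Linux setup (the Python version, environment name, and abbreviated launch flags below are assumptions pieced together from the dev README link above and the commands later in this thread, not an official recipe):

# assumed setup sketch; see README_dev_en.md for the authoritative steps
conda create -n vcclient python=3.10   # or: python3 -m venv venv && source venv/bin/activate
conda activate vcclient
git clone https://github.com/w-okada/voice-changer.git
cd voice-changer/server
pip install -r requirements.txt
# launch with the pretrain/model flags shown in the logs further down this thread
python3 MMVCServerSIO.py -p 18888 --https true ...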

@Fractal-Tess

Fractal-Tess commented Jul 19, 2023

I followed the installation guide, but I used venv instead of conda.
At first I got this error:

python3 MMVCServerSIO.py -p 18888 --https true \
    --content_vec_500 pretrain/checkpoint_best_legacy_500.pt  \
    --content_vec_500_onnx pretrain/content_vec_500.onnx \
    --content_vec_500_onnx_on true \
    --hubert_base pretrain/hubert_base.pt \
    --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
    --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
    --nsf_hifigan pretrain/nsf_hifigan/model \
    --crepe_onnx_full pretrain/crepe_onnx_full.onnx \
    --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
    --rmvpe pretrain/rmvpe.pt \
    --model_dir model_dir \
    --samples samples.json
    Booting PHASE :__main__
    PYTHON:3.10.6 (main, Jan  5 2023, 12:29:14) [GCC 12.2.0]
    Voice Changerを起動しています。
++[Voice Changer] model_dir is already exists. skip download samples.
    Internal_Port:18888
    protocol: HTTPS(self-signed), key:keys/20230719_143246.key, cert:keys/20230719_143246.cert
    -- ---- -- 
    ブラウザで次のURLを開いてください.
    https://<IP>:<PORT>/
    多くの場合は次のいずれかのURLにアクセスすると起動します。
    https://localhost:18888/
    https://192.168.0.2:18888/
    Booting PHASE :__mp_main__
    サーバプロセスを起動しています。
    Booting PHASE :MMVCServerSIO
[MODEL SLOT INFO] [RVCModelSlot(voiceChangerType='RVC', name='つくよみちゃん(onnx)', description='', credit='つくよみちゃん', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tsukuyomi-chan/terms_of_use.txt', iconFile='model_dir/0/tsukuyomi-chan.png', speakers={'0': 'target'}, modelFile='model_dir/0/tsukuyomi_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Tsukuyomi-chan_o'), RVCModelSlot(voiceChangerType='RVC', name='あみたろ(onnx)', description='', credit='あみたろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/amitaro/terms_of_use.txt', iconFile='model_dir/1/amitaro.png', speakers={'0': 'target'}, modelFile='model_dir/1/amitaro_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Amitaro_o'), RVCModelSlot(voiceChangerType='RVC', name='黄琴まひろ(onnx)', description='', credit='黄琴まひろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/kikoto_mahiro/terms_of_use.txt', iconFile='model_dir/2/kikoto_mahiro.png', speakers={'0': 'target'}, modelFile='model_dir/2/kikoto_mahiro_v2_40k_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='KikotoMahiro_o'), RVCModelSlot(voiceChangerType='RVC', name='刻鳴時雨(onnx)', description='', credit='刻鳴時雨', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tokina_shigure/terms_of_use.txt', iconFile='model_dir/3/tokina_shigure.png', speakers={'0': 'target'}, modelFile='model_dir/3/tokina_shigure_v2_40k_e100_simple.onnx', indexFile='model_dir/3/added_IVF2736_Flat_nprobe_1_v2.index.bin', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='TokinaShigure_o'), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={})]
[Voice Changer] model slot is changed -1 -> 3
................RVC
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/mnt/dev/thirdparty/voice-changer/server/MMVCServerSIO.py", line 119, in <module>
    voiceChangerManager = VoiceChangerManager.get_instance(voiceChangerParams)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 115, in get_instance
    cls._instance = cls(params)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 83, in __init__
    self.update_settings(key, val)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 262, in update_settings
    self.generateVoiceChanger(newVal)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 208, in generateVoiceChanger
    from voice_changer.RVC.RVC import RVC
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/RVC.py", line 25, in <module>
    from voice_changer.RVC.embedder.EmbedderManager import EmbedderManager
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/embedder/EmbedderManager.py", line 5, in <module>
    from voice_changer.RVC.embedder.FairseqContentvec import FairseqContentvec
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/embedder/FairseqContentvec.py", line 3, in <module>
    from voice_changer.RVC.embedder.FairseqHubert import FairseqHubert
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/embedder/FairseqHubert.py", line 4, in <module>
    from fairseq import checkpoint_utils
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fairseq/__init__.py", line 21, in <module>
    from fairseq.logging import meters, metrics, progress_bar  # noqa
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 407, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py", line 12, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module>
    from tensorboard.compat.proto.event_pb2 import SessionLog
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import histogram_pb2 as tensorboard_dot_compat_dot_proto_dot_histogram__pb2
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/tensorboard/compat/proto/histogram_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/google/protobuf/descriptor.py", line 561, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
^CTraceback (most recent call last):
  File "<string>", line 1, in <module>

So I tried downgrading the protobuf package to 3.19.0 and got this error:

 python3 MMVCServerSIO.py -p 18888 --https true \
    --content_vec_500 pretrain/checkpoint_best_legacy_500.pt  \
    --content_vec_500_onnx pretrain/content_vec_500.onnx \
    --content_vec_500_onnx_on true \
    --hubert_base pretrain/hubert_base.pt \
    --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
    --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
    --nsf_hifigan pretrain/nsf_hifigan/model \
    --crepe_onnx_full pretrain/crepe_onnx_full.onnx \
    --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
    --rmvpe pretrain/rmvpe.pt \
    --model_dir model_dir \
    --samples samples.json
    Booting PHASE :__main__
    PYTHON:3.10.6 (main, Jan  5 2023, 12:29:14) [GCC 12.2.0]
    Voice Changerを起動しています。
++[Voice Changer] model_dir is already exists. skip download samples.
    Internal_Port:18888
    protocol: HTTPS(self-signed), key:keys/20230719_143435.key, cert:keys/20230719_143435.cert
    -- ---- -- 
    ブラウザで次のURLを開いてください.
    https://<IP>:<PORT>/
    多くの場合は次のいずれかのURLにアクセスすると起動します。
    https://localhost:18888/
    https://192.168.0.2:18888/
    Booting PHASE :__mp_main__
    サーバプロセスを起動しています。
    Booting PHASE :MMVCServerSIO
[MODEL SLOT INFO] [RVCModelSlot(voiceChangerType='RVC', name='つくよみちゃん(onnx)', description='', credit='つくよみちゃん', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tsukuyomi-chan/terms_of_use.txt', iconFile='model_dir/0/tsukuyomi-chan.png', speakers={'0': 'target'}, modelFile='model_dir/0/tsukuyomi_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Tsukuyomi-chan_o'), RVCModelSlot(voiceChangerType='RVC', name='あみたろ(onnx)', description='', credit='あみたろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/amitaro/terms_of_use.txt', iconFile='model_dir/1/amitaro.png', speakers={'0': 'target'}, modelFile='model_dir/1/amitaro_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Amitaro_o'), RVCModelSlot(voiceChangerType='RVC', name='黄琴まひろ(onnx)', description='', credit='黄琴まひろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/kikoto_mahiro/terms_of_use.txt', iconFile='model_dir/2/kikoto_mahiro.png', speakers={'0': 'target'}, modelFile='model_dir/2/kikoto_mahiro_v2_40k_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='KikotoMahiro_o'), RVCModelSlot(voiceChangerType='RVC', name='刻鳴時雨(onnx)', description='', credit='刻鳴時雨', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tokina_shigure/terms_of_use.txt', iconFile='model_dir/3/tokina_shigure.png', speakers={'0': 'target'}, modelFile='model_dir/3/tokina_shigure_v2_40k_e100_simple.onnx', indexFile='model_dir/3/added_IVF2736_Flat_nprobe_1_v2.index.bin', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='TokinaShigure_o'), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={})]
[Voice Changer] model slot is changed -1 -> 3
................RVC
libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory
WARNING:root:WARNING: libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory
Need to compile C++ extensions to get sparse attention suport. Please run python setup.py build develop
WARNING:root:Blocksparse is not available: the current GPU does not expose Tensor cores
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/mnt/dev/thirdparty/voice-changer/server/MMVCServerSIO.py", line 119, in <module>
    voiceChangerManager = VoiceChangerManager.get_instance(voiceChangerParams)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 115, in get_instance
    cls._instance = cls(params)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 83, in __init__
    self.update_settings(key, val)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 262, in update_settings
    self.generateVoiceChanger(newVal)
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/VoiceChangerManager.py", line 208, in generateVoiceChanger
    from voice_changer.RVC.RVC import RVC
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/RVC.py", line 28, in <module>
    from voice_changer.RVC.onnxExporter.export2onnx import export2onnx
  File "/mnt/dev/thirdparty/voice-changer/server/voice_changer/RVC/onnxExporter/export2onnx.py", line 4, in <module>
    from onnxsim import simplify
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnxsim/__init__.py", line 1, in <module>
    from onnxsim.onnx_simplifier import simplify, main
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnxsim/onnx_simplifier.py", line 13, in <module>
    import onnx  # type: ignore
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnx/__init__.py", line 13, in <module>
    from onnx.external_data_helper import (
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnx/external_data_helper.py", line 11, in <module>
    from onnx.onnx_pb import AttributeProto, GraphProto, ModelProto, TensorProto
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnx/onnx_pb.py", line 4, in <module>
    from .onnx_ml_pb2 import *  # noqa
  File "/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/onnx/onnx_ml_pb2.py", line 5, in <module>
    from google.protobuf.internal import builder as _builder
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/home/fractal-tess/.pyenv/versions/3.10.6/lib/python3.10/site-packages/google/protobuf/internal/__init__.py)
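Presumably this is because onnx's generated protobuf modules import google.protobuf.internal.builder, which only exists from protobuf 3.20 onward, so 3.19.0 satisfies the first error's suggestion but is too old for onnx. A pin inside the 3.20.x range (an assumption based on the error text, not a version this repo officially pins) would be something like:

pip install protobuf==3.20.3   # old enough for the *_pb2 descriptor check, new enough to provide builder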

@w-okada
Owner

w-okada commented Jul 20, 2023

How about setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python?

@Fractal-Tess

How about setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python?

You mean as an environment variable?

@w-okada
Owner

w-okada commented Jul 20, 2023

Yes.

The error log said:

If you cannot immediately regenerate your protos, some other possible workarounds are:

  1. Downgrade the protobuf package to 3.20.x or lower.
  2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
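In shell terms, the two workarounds look roughly like this (a minimal sketch; the exact protobuf pin is an assumption within the suggested 3.20.x range):

# option 1: downgrade protobuf
pip install protobuf==3.20.3

# option 2: force the pure-Python protobuf implementation (slower, but avoids the descriptor check)
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
python3 MMVCServerSIO.py -p 18888 --https true ...   # same flags as before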

@KnownDimension

Yeah, I get that Anaconda and Docker are available, but it's very hit or miss, especially considering the video tutorial is made for WSL and the prebuilt binary doesn't come in an AMD version (Linux users use AMD a lot).

I did try to get it running, but Anaconda was a bust because pyworld wouldn't build its wheel when installing requirements.txt, and Docker did load up a web GUI, but that is really as far as I was able to get: other than changing settings, audio wouldn't record and the list of voice models wasn't visible.

@IDEDARY

IDEDARY commented Jul 22, 2023

I am trying to set it up and it is a pain. So far I have been manually installing Python packages for the last 15 minutes, because for some reason they are not included in requirements.txt. It just crashes at runtime saying packages are missing, so I need to install them myself. After spending some time figuring out that the socketio module is installed as python-socketio, I finally got it to run.
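For anyone hitting the same wall, the PyPI package name differs from the import name:

pip install python-socketio   # installs the package that provides "import socketio"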

@Fractal-Tess

I got it working today with the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python environment variable.

As far as setup is concerned, yeah, the lack of Linux support is a shame; although I was able to run the Docker container, it hit similar errors and I wasn't able to get it into a working state.

@IDEDARY

IDEDARY commented Jul 22, 2023

I got it working today with the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python environment variable.

As far as setup is concerned, yeah, the lack of Linux support is a shame; although I was able to run the Docker container, it hit similar errors and I wasn't able to get it into a working state.

How did you set the environment variable? I'm currently stuck in the web UI. It fails to initialize with "no error message" (screenshot attached).
No crash log, no console output, nothing.

@Fractal-Tess

I got it working today with the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python environment variable.
As far as setup is concerned, yeah, the lack of Linux support is a shame; although I was able to run the Docker container, it hit similar errors and I wasn't able to get it into a working state.

How did you set the environment variable? I'm currently stuck in the web UI. It fails to initialize with "no error message" (screenshot attached). No crash log, no console output, nothing.

Hmm... I'm not sure what the problem with your machine might be.
Tomorrow I'll try to put together a workable Docker container and provide a link to the image.

@ZachAR3

ZachAR3 commented Jul 23, 2023

I got it working today with the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python environment variable.
As far as setup is concerned, yeah, the lack of Linux support is a shame; although I was able to run the Docker container, it hit similar errors and I wasn't able to get it into a working state.

How did you set the environment variable? I'm currently stuck in the web UI. It fails to initialize with "no error message" (screenshot attached). No crash log, no console output, nothing.

I'm having the exact same issue. It downloaded all the models fine, I have it in its own conda environment and installed all the dependencies as asked, but it gets stuck after completing boot with no errors, and the website gives the same error as theirs. PC specs: Arch Linux, kernel 6.4.3-zen, RTX 2080 Ti + GTX 1060 6GB, Ryzen 5900X, 32 GB 3600 MHz RAM, NVIDIA proprietary drivers. Log with the command used to launch it:

python3 MMVCServerSIO.py -p 18888 --https true \
    --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \
    --content_vec_500_onnx pretrain/content_vec_500.onnx \
    --content_vec_500_onnx_on true \
    --hubert_base pretrain/hubert_base.pt \
    --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
    --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
    --nsf_hifigan pretrain/nsf_hifigan/model \
    --crepe_onnx_full pretrain/crepe_onnx_full.onnx \
    --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
    --rmvpe pretrain/rmvpe.pt \
    --model_dir model_dir \
    --samples samples.json --trace-warnings
    Booting PHASE :__main__
    PYTHON:3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0]
    Voice Changerを起動しています。
+++[Voice Changer] model_dir is already exists. skip download samples.
    Internal_Port:18888
    protocol: HTTPS(self-signed), key:keys/20230723_054001.key, cert:keys/20230723_054001.cert
    -- ---- -- 
    ブラウザで次のURLを開いてください.
    https://<IP>:<PORT>/
    多くの場合は次のいずれかのURLにアクセスすると起動します。
    https://localhost:18888/
    https://172.16.1.153:18888/
    Booting PHASE :__mp_main__
    サーバプロセスを起動しています。
    Booting PHASE :MMVCServerSIO
[MODEL SLOT INFO] [RVCModelSlot(voiceChangerType='RVC', name='つくよみちゃん(onnx)', description='', credit='つくよみちゃん', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tsukuyomi-chan/terms_of_use.txt', iconFile='model_dir/0/tsukuyomi-chan.png', speakers={'0': 'target'}, modelFile='model_dir/0/tsukuyomi_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Tsukuyomi-chan_o'), RVCModelSlot(voiceChangerType='RVC', name='あみたろ(onnx)', description='', credit='あみたろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/amitaro/terms_of_use.txt', iconFile='model_dir/1/amitaro.png', speakers={'0': 'target'}, modelFile='model_dir/1/amitaro_v2_40k_e100_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='Amitaro_o'), RVCModelSlot(voiceChangerType='RVC', name='黄琴まひろ(onnx)', description='', credit='黄琴まひろ', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/kikoto_mahiro/terms_of_use.txt', iconFile='model_dir/2/kikoto_mahiro.png', speakers={'0': 'target'}, modelFile='model_dir/2/kikoto_mahiro_v2_40k_simple.onnx', indexFile='', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='KikotoMahiro_o'), RVCModelSlot(voiceChangerType='RVC', name='刻鳴時雨(onnx)', description='', credit='刻鳴時雨', termsOfUseUrl='https://huggingface.co/wok000/vcclient_model/raw/main/rvc_v2_alpha/tokina_shigure/terms_of_use.txt', iconFile='model_dir/3/tokina_shigure.png', speakers={'0': 'target'}, modelFile='model_dir/3/tokina_shigure_v2_40k_e100_simple.onnx', indexFile='model_dir/3/added_IVF2736_Flat_nprobe_1_v2.index.bin', defaultTune=0, defaultIndexRatio=0, defaultProtect=0.5, isONNX=True, modelType='onnxRVC', samplingRate=40000, f0=True, embChannels=768, embOutputLayer=12, useFinalProj=False, deprecated=False, embedder='hubert_base', sampleId='TokinaShigure_o'), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={}), ModelSlot(voiceChangerType=None, name='', description='', credit='', termsOfUseUrl='', iconFile='', speakers={})]

@PixelSam123

Yeah, I get that Anaconda and Docker are available, but it's very hit or miss, especially considering the video tutorial is made for WSL and the prebuilt binary doesn't come in an AMD version (Linux users use AMD a lot).

I did try to get it running, but Anaconda was a bust because pyworld wouldn't build its wheel when installing requirements.txt, and Docker did load up a web GUI, but that is really as far as I was able to get: other than changing settings, audio wouldn't record and the list of voice models wasn't visible.

Simply bump pyworld to version 0.3.4 in the requirements file; it works that way. I didn't have to set any protobuf environment variables. Server mode on Linux felt really slow when I tried it, though that might be a setup issue on my end.
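Concretely, that means something like this before installing (a sketch; adjust the sed pattern to however pyworld is listed in your requirements.txt):

sed -i 's/^pyworld==.*/pyworld==0.3.4/' requirements.txt
pip install -r requirements.txt
# or just upgrade it in place:
pip install pyworld==0.3.4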

@ZachAR3

ZachAR3 commented Jul 23, 2023

Yeah, I get that Anaconda and Docker are available, but it's very hit or miss, especially considering the video tutorial is made for WSL and the prebuilt binary doesn't come in an AMD version (Linux users use AMD a lot).
I did try to get it running, but Anaconda was a bust because pyworld wouldn't build its wheel when installing requirements.txt, and Docker did load up a web GUI, but that is really as far as I was able to get: other than changing settings, audio wouldn't record and the list of voice models wasn't visible.

Simply bump pyworld to version 0.3.4 in the requirements file; it works that way. I didn't have to set any protobuf environment variables. Server mode on Linux felt really slow when I tried it, though that might be a setup issue on my end.

Same issue even after updating pyworld

@ZachAR3

ZachAR3 commented Jul 27, 2023

I tried commenting out mp.freeze_support() as well as setting https to false as suggested in #377, with no luck on Conda. What's strange is that before the error page I see the actual web UI load for a split second, then it switches. I also tried running it with PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python, to no avail.

@ZachAR3

ZachAR3 commented Jul 29, 2023

After running git pull to update, I am still experiencing the issue, but this time with an error message on the site!
The top error:

Error
unhandledrejection
no error stack

The bottom error:

TypeError: n.setSinkId is not a function
yV/</e/</<@http://localhost:18888/index.js:2:3052131
yV/</e/<@http://localhost:18888/index.js:2:3052001
h@http://localhost:18888/index.js:2:1380451
61/r/z/<@http://localhost:18888/index.js:2:1381807
61/r/w/</<@http://localhost:18888/index.js:2:1380814
e@http://localhost:18888/index.js:2:1387744
o@http://localhost:18888/index.js:2:1387948

@ZachAR3

ZachAR3 commented Jul 29, 2023

After switching browsers to a Chromium-based one, as suggested in #587, the voice changer works! I'm unsure whether it was due to an Arch package update or something Okada did, but it now seems to be working for me using Anaconda on Arch. For anyone else hitting this, the steps on Arch (for conda at least): pin pyworld==0.3.4 and protobuf==3.20.0 or errors will come up, pull the latest master, and use a Chromium-based browser.
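Put together, the steps look roughly like this (a sketch of what worked for me; the pins are the ones mentioned in this thread, not official requirements, and the env name is just an example):

conda activate vcclient              # whatever your conda env is called
git pull                             # latest master
pip install pyworld==0.3.4 protobuf==3.20.0
cd server
python3 MMVCServerSIO.py -p 18888 --https true ...   # same flags as earlier in this thread
# then open https://localhost:18888/ in a Chromium-based browser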

@Suchys22

I followed that guide, but I still don't know where to launch it from. A prebuilt version (like the Windows and macOS ones) would be awesome.

@ZachAR3

ZachAR3 commented Jul 30, 2023

I followed that guide, but I still don't know where to launch it from. A prebuilt version (like the Windows and macOS ones) would be awesome.

https://youtu.be/CL_b6T0TUic

@ZachAR3

ZachAR3 commented Aug 1, 2023

@w-okada since no one else is commenting and the issue doesn't seem to appear anymore, do you want to close this issue?

@w-okada
Owner

w-okada commented Aug 2, 2023

Yes, I'll close this issue but I'm truly, truly grateful for everyone's discussion. And Linux is the best.

@w-okada w-okada closed this as completed Aug 2, 2023
@theoparis

theoparis commented Aug 4, 2023

For anyone trying to install this on Linux with pyenv: I ran into an issue related to pyworld, which is solved by using --no-build-isolation.

This is how I was able to set it up with the fish shell:

pyenv install 3.10
pyenv global 3.10
python3 -m venv venv
source ./venv/bin/activate.fish
python3 -m pip install -U pip numpy wheel
cd server
python3 -m pip install -r requirements.txt --no-build-isolation

@w-okada w-okada mentioned this issue Aug 23, 2023
@LuisArtDavila

Hello. I'm using Anaconda. It is running without any errors, but I cannot select my GPU. I am using an AMD 6700 XT and I am hoping to make use of the ROCm support on Linux. Any ideas? I can only see my CPU. Additionally, I get the following error after selecting a model:

[Voice Changer] post_update_settings ex: No module named 'fairseq'
Traceback (most recent call last):
  File "/home/basil/voice-changer/server/restapi/MMVC_Rest_Fileuploader.py", line 71, in post_update_settings
    info = self.voiceChangerManager.update_settings(key, val)
  File "/home/basil/voice-changer/server/voice_changer/VoiceChangerManager.py", line 311, in update_settings
    self.generateVoiceChanger(newVal)
  File "/home/basil/voice-changer/server/voice_changer/VoiceChangerManager.py", line 243, in generateVoiceChanger
    from voice_changer.RVC.RVCr2 import RVCr2
  File "/home/basil/voice-changer/server/voice_changer/RVC/RVCr2.py", line 11, in <module>
    from voice_changer.RVC.embedder.EmbedderManager import EmbedderManager
  File "/home/basil/voice-changer/server/voice_changer/RVC/embedder/EmbedderManager.py", line 5, in <module>
    from voice_changer.RVC.embedder.FairseqContentvec import FairseqContentvec
  File "/home/basil/voice-changer/server/voice_changer/RVC/embedder/FairseqContentvec.py", line 3, in <module>
    from voice_changer.RVC.embedder.FairseqHubert import FairseqHubert
  File "/home/basil/voice-changer/server/voice_changer/RVC/embedder/FairseqHubert.py", line 4, in <module>
    from fairseq import checkpoint_utils
ModuleNotFoundError: No module named 'fairseq'

I am on Arch Linux.

@LuisArtDavila

So after installing fairseq and pyworld using pip, I am able to use the voice changer client without issue except that it still doesn't show my GPU.
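For anyone else in the same spot, the missing-module part was just:

pip install fairseq pyworld
# For the GPU to show up on an AMD card, PyTorch itself most likely needs to be a ROCm build
# (an assumption on my part, not something I have verified with this client), e.g.:
# pip install torch --index-url https://download.pytorch.org/whl/rocm5.6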
