Open Innovation project: AI prediction to identify actors from their voices by training our model
pip3 install -r requirements.txt
pip3 install pulsar-client
Create a Python virtual environment in the directory ./venv:
python3 -m venv --system-site-packages ./venv
Activate this virtual env:
source ./venv/bin/activate # sh, bash, or zsh
. ./venv/bin/activate.fish # fish
source ./venv/bin/activate.csh # csh or tcsh
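To check that the venv is really active, you can use a quick standard-library test (a hypothetical helper, not part of this repo — in a venv, `sys.prefix` differs from `sys.base_prefix`):

```python
import sys

def in_virtualenv() -> bool:
    """True when Python is running inside a virtual environment."""
    return sys.prefix != sys.base_prefix

if __name__ == "__main__":
    print("venv active:", in_virtualenv())
```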
When the venv is active, your shell prompt is prefixed with (venv).
Install packages and dependencies in the virtual env, without modifying the host system's configuration:
pip install --upgrade pip
pip list # show packages installed within the virtual environment
To exit the venv:
deactivate # don't exit until you're done using TensorFlow
Install TensorFlow I/O:
pip3 install tensorflow_io
Adapt the dockerfile.dist file into your own Dockerfile.
Build the Docker image:
docker build --rm -f Dockerfile -t your_container_docker_name .
Open an interactive shell in the running container:
docker exec -it [container_name] bash
Then run your Python scripts as you would on a regular machine.
To exit the container:
exit
If you have modified a script or other files and want to avoid a long rebuild, copy them directly into the container:
docker cp [path_to_your_file] [container_id]:/[path_to_cp_your_file]
WARNING: on Linux, the SoundFile dependency does not install the native libsndfile library. Before running the Python script, install it:
apt-get install libsndfile1
and press Y when prompted.
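Once libsndfile is installed, it is worth sanity-checking audio I/O before launching a long training run. The sketch below uses only the standard-library wave module (not SoundFile itself); the file name and audio parameters are made up for illustration:

```python
import wave

# Write a tiny mono 16-bit WAV file, then read it back.
path = "check.wav"
with wave.open(path, "wb") as w:
    w.setnchannels(1)                    # mono
    w.setsampwidth(2)                    # 16-bit samples
    w.setframerate(16000)                # 16 kHz
    w.writeframes(b"\x00\x00" * 16000)   # one second of silence

with wave.open(path, "rb") as r:
    assert r.getnframes() == 16000
    print("audio I/O OK:", r.getframerate(), "Hz")
```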
Tutorial link : https://www.scaleway.com/en/docs/object-storage-with-aws-cli/
If you're using an S3 bucket, you need to set up an AWS configuration (or an equivalent). This is done by editing the config file; the AWS CLI itself is installed via requirements.txt.
In order to edit the config file, you need to
- create the file
aws configure set plugins.endpoint awscli_plugin_endpoint
- go to where the file is located:
cd ~/.aws/
vim config
- Edit it like this :
[plugins]
endpoint = awscli_plugin_endpoint
[default]
region = nl-ams
s3 =
endpoint_url = https://s3.nl-ams.scw.cloud
signature_version = s3v4
max_concurrent_requests = 100
max_queue_size = 1000
multipart_threshold = 50MB
# Edit the multipart_chunksize value according to the sizes of the files you want to upload. The configuration above allows uploads of up to 10 GB (1,000 parts * 10 MB). Setting it to 5 GB, for example, allows files up to 5 TB.
multipart_chunksize = 10MB
s3api =
endpoint_url = https://s3.nl-ams.scw.cloud
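If you prefer to generate this file from a script rather than editing it in vim, a sketch using the standard-library configparser is below. The nested s3/s3api settings are stored as indented multi-line values, which is how the AWS CLI parses them; the output path is illustrative (in practice, write to ~/.aws/config):

```python
import configparser

cfg = configparser.ConfigParser()
cfg["plugins"] = {"endpoint": "awscli_plugin_endpoint"}
cfg["default"] = {
    "region": "nl-ams",
    # Nested s3 settings are written as an indented block under "s3 =".
    "s3": "\nendpoint_url = https://s3.nl-ams.scw.cloud"
          "\nsignature_version = s3v4"
          "\nmax_concurrent_requests = 100"
          "\nmax_queue_size = 1000"
          "\nmultipart_threshold = 50MB"
          "\nmultipart_chunksize = 10MB",
    "s3api": "\nendpoint_url = https://s3.nl-ams.scw.cloud",
}

with open("config", "w") as f:  # in practice: ~/.aws/config
    cfg.write(f)
```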
Specify your own region and endpoint URL.
Then set your credentials in the credentials file:
- create the file
aws configure
- Edit it like this:
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
To mount the bucket as a filesystem, you can use s3fs (tutorial: https://www.scaleway.com/en/docs/object-storage-with-s3fs/). Install its build dependencies:
apt -y install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config
👤 Yann Durand
- Website: https://codewithnefaden.wordpress.com/
- Twitter: @YannDurand11
- Github: @Nefaden
👤 Eddy Cheval
- Github: @EddyCheval
👤 Alban Guillet
- Github: @AlbanGuillet
👤 Simon Huet
- Github: @SimonHuet
👤 Alexandre Rabreau
- Github: @AlexandreRab
Give a ⭐️ if this project helped you!
This README was generated with ❤️ by readme-md-generator