Flood prediction using LSTM and Deep Learning approaches.
Create a Python virtual environment to install all the necessary packages:

```bash
python -m venv .venv
```

or

```bash
conda create -n flood_proj python=3.10.12
```
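Whichever option you choose, activate the environment before installing anything. These are the standard activation commands for the two setups above:

```bash
# venv (Linux/macOS)
source .venv/bin/activate

# conda
conda activate flood_proj
```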
Install the required packages into the created virtual environment:

```bash
pip install -r requirements.txt
```
Create a `.env` file in `/ML` and add a `WANDB_ENTITY` entry.
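For reference, the file only needs the Weights & Biases entity used for experiment logging; the value below is a placeholder, not a real account name:

```bash
# /ML/.env — replace the placeholder with your own W&B entity (user or team name)
WANDB_ENTITY=your_wandb_entity
```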
To prepare the dataset, see the dataset README. To train models, see the ML README.
Transfer data from the host machine to the remote server (note: run the command in the host's terminal!):

```bash
rsync -azP /local/path/to/source/file user_name@server_ip:/remote/path/to/destination
```

Example:

```bash
rsync -azP /Users/abzal/Desktop/issai-srp/php03V9iD.png abzal_nurgazy@10.10.25.13:/raid/abzal_nurgazy/flood-prediction
```
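Copying in the other direction (remote to host) just swaps the two arguments; the remote path below is illustrative, adjust it to whatever you need to fetch:

```bash
# run on the host machine; example paths only
rsync -azP abzal_nurgazy@10.10.25.13:/raid/abzal_nurgazy/flood-prediction/results ./results
```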
Make sure that only YOU can read and write your SSH key file; otherwise rsync will fail with the following error:

```
Permissions 0777 for '/Users/username/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
```

To make your SSH key readable and writable only by you, run:

```bash
chmod 600 ~/.ssh/id_rsa
```
List the available GPU indices and their unique IDs (the UUID is what you pass to Docker's `--gpus` option below):

```bash
nvidia-smi --query-gpu=index,uuid --format=csv
```
To run the Docker container, use the following command pattern (note: run it inside tmux!):

```bash
tmux new -s session_name
docker run --name container_name --gpus '"device=GPU-id"' --rm -v /local/path:/container/path --workdir /container/path image_name command
```

Example:

```bash
docker run --name test_run1 --gpus '"device=GPU-a6535fb0-896f-edf3-632a-c44f49ad8600"' --rm \
  -v /raid/abzal_nurgazy/flood-prediction:/workspace \
  --workdir /workspace flood-prediction python3 test_run.py
```
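To check that the container started and to follow its output, the standard Docker commands work; the container name matches the `--name` argument used above:

```bash
# list running containers
docker ps

# stream logs from the example container
docker logs -f test_run1
```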
To list running tmux sessions and re-attach to one (use CTRL+B, then D to detach from the current session):

```bash
tmux list-sessions
tmux attach-session -t session_name
```
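Once a run has finished, the session can be closed; this is standard tmux, not anything specific to this repo:

```bash
tmux kill-session -t session_name
```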