Install Git LFS before cloning if you want to download binaries and pre-trained models.
sudo apt install git-lfs
Start by cloning the repository into a directory of your choice. We recommend cloning GitHub repositories via SSH instead of HTTPS. If you have not yet generated an SSH key and/or linked it to GitHub, please follow this short guide. The repository can then be cloned using
# cd <your chosen project directory>
git clone git@github.com:ethz-asl/analog_gauge_reader.git
To maintain a consistent code style and catch some common types of mistakes, `pre-commit` can be used to automatically format and lint the repository's code whenever a new commit is created. If you are interested, a detailed guide and installation instructions are available here. The tool can be installed with
pip3 install pre-commit
You can then enable it for this project by calling
# cd <your chosen project directory>
pre-commit install
After the above command, `pre-commit` will automatically check all changed files whenever you try to commit them. You can also run it manually on all of the repository's files at any time by calling `pre-commit run --all-files`, and add or customize the checks it performs by editing its config file `.pre-commit-config.yaml`, located in this repository's root directory.
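As an illustration of what such a config file can look like (this is a generic sketch, not this repository's actual `.pre-commit-config.yaml`), a minimal configuration with a few standard hooks might be:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace   # strip trailing whitespace
      - id: end-of-file-fixer     # ensure files end with a newline
      - id: check-yaml            # validate YAML syntax
```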
On commit, some linting issues are fixed automatically. To accept these changes, `git add` the corresponding files and run `git commit` again. However, some issues cannot be fixed automatically; these need to be resolved manually before the commit can succeed.
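The fix-and-recommit flow can be sketched in a throwaway repository as follows; the hook run is guarded so the sketch also works where `pre-commit` is not installed, and the file and commit message are placeholders:

```shell
# Illustrative sketch of the "auto-fix, re-add, re-commit" flow.
# If pre-commit rewrites files during a commit, that commit is aborted;
# re-adding the rewritten files and committing again accepts the fixes.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example User"
echo "print('hello')" > demo.py
git add demo.py
# Run the hooks manually if pre-commit is available (they may modify files).
if command -v pre-commit >/dev/null 2>&1; then
  pre-commit run --all-files || true
fi
# Stage any automatic fixes and commit again.
git add -u
git commit -q -m "apply automatic lint fixes"
git log --oneline
```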
Install Poetry
curl -sSL https://install.python-poetry.org | python3 -
Install the project dependencies
poetry install
Enter Poetry shell
poetry shell
To set up the conda environment used to run all scripts, follow these instructions:
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
conda create --name gauge_reader python=3.8 -y
conda activate gauge_reader
We use PyTorch version 2.0.0.
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 -c pytorch -c nvidia
Refer to this page for installation instructions: https://mmocr.readthedocs.io/en/dev-1.x/get_started/install.html We use the dev-1.x version.
pip install -U openmim
mim install mmengine==0.7.2
mim install mmcv==2.0.0
mim install mmdet==3.0.0
mim install mmocr==1.0.0
We use the following versions: mmocr 1.0.0, mmdet 3.0.0, mmcv 2.0.0, mmengine 0.7.2. If for some reason the installation fails, refer to open-mmlab/mmcv#2938. We found that it is essential to have PyTorch version 2.0.0.
We use ultralytics version 8.0.66.
pip install ultralytics
We use scikit-learn version 1.2.2.
pip install -U scikit-learn
The pipeline script can be run with the following command:
python pipeline.py --detection_model path/to/detection_model --segmentation_model /path/to/segmentation_model --key_point_model path/to/key_point_model --base_path path/to/results --input path/to/test_image_folder/images --debug --eval
For the input you can either choose an entire folder of images or a single image. In both cases the results are saved to a new run folder created in the `base_path` folder. For each image in the input folder, a separate folder is created. In each such folder, the reading is stored in the `result.json` file. If there is no such reading, one of the pipeline stages failed before a reading could be computed; check the log file saved inside the run folder to see where the error came up. An `error.json` file is also saved to the image folder, containing some metrics that help judge, without any labels, how good our estimate is.
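Since each image gets its own folder with a `result.json` inside, a small helper can gather all readings of a run into one dictionary. This is only a sketch: the key name `reading` is an assumption, so adapt it to the actual contents of your `result.json` files.

```python
import json
from pathlib import Path


def collect_readings(run_folder):
    """Collect per-image readings from each image subfolder's result.json.

    Assumes every image subfolder of `run_folder` contains a result.json;
    the key name "reading" is hypothetical -- adapt it to the real schema.
    """
    readings = {}
    for result_file in Path(run_folder).glob("*/result.json"):
        with open(result_file) as f:
            data = json.load(f)
        # Missing key yields None, mirroring a failed pipeline stage.
        readings[result_file.parent.name] = data.get("reading")
    return readings
```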
Additionally, if the `--debug` flag is set, the plots of all pipeline stages are added to this folder. If the `--eval` flag is set, a `result_full.json` file is also created. It contains the data of the individual pipeline stages, which is used for evaluation in the script `full_evaluation.py`.
I prepared two scripts to automatically run the pipeline and evaluations on multiple folders with one command. This allows us to easily conduct experiments for images that we group by their characteristics in different folders.
If you want to use them, make sure to modify the paths inside the scripts to match your data.
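Such a batch script can be sketched as follows. The flag names are taken from the pipeline command above, but all model paths and folder names are placeholders you would replace with your own:

```python
# Hypothetical sketch of a batch runner: build one pipeline.py invocation
# per image folder. All paths below are placeholders.
DETECTION_MODEL = "models/detection.pt"
SEGMENTATION_MODEL = "models/segmentation.pt"
KEY_POINT_MODEL = "models/key_points.pt"


def build_pipeline_cmd(input_folder, base_path, eval_mode=True):
    """Return the argv list for one pipeline.py run."""
    cmd = [
        "python", "pipeline.py",
        "--detection_model", DETECTION_MODEL,
        "--segmentation_model", SEGMENTATION_MODEL,
        "--key_point_model", KEY_POINT_MODEL,
        "--base_path", str(base_path),
        "--input", str(input_folder),
    ]
    if eval_mode:
        cmd.append("--eval")
    return cmd


# One command per experiment folder; each could then be executed
# with subprocess.run(cmd, check=True).
folders = ["data/group_a/images", "data/group_b/images"]
commands = [build_pipeline_cmd(f, "results") for f in folders]
```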