Merge branch 'dev'
banderlog committed Jul 10, 2021
2 parents 5c8c5ff + 169b81e commit 3e7d668
Showing 15 changed files with 21 additions and 68 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -75,7 +75,7 @@ For additional info read `cv2.getBuildInformation()` output.

You will need ~7GB RAM and ~10GB disk space

I am using Ubuntu 18.04 [multipass](https://multipass.run/) instance: `multipass launch -c 6 -d 10G -m 7G 18.04`.
I am using Ubuntu 18.04 (python 3.6) [multipass](https://multipass.run/) instance: `multipass launch -c 6 -d 10G -m 7G 18.04`.

### Requirements

@@ -104,6 +104,7 @@ sudo ln -s /usr/bin/python3 /usr/bin/python
```bash
git clone https://github.com/banderlog/opencv-python-inference-engine
cd opencv-python-inference-engine
# git checkout dev
./download_all_stuff.sh
```

2 changes: 1 addition & 1 deletion build/opencv/opencv_setup.sh
@@ -44,7 +44,7 @@ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D FFMPEG_INCLUDE_DIRS=$FFMPEG_PATH/include \
-D INF_ENGINE_INCLUDE_DIRS=$ABS_PORTION/dldt/inference-engine/include \
-D INF_ENGINE_LIB_DIRS=$ABS_PORTION/dldt/bin/intel64/Release/lib \
-D INF_ENGINE_RELEASE=2021030000 \
-D INF_ENGINE_RELEASE=2021040000 \
-D INSTALL_CREATE_DISTRIB=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
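The `INF_ENGINE_*` flags above point OpenCV's CMake at the dldt headers and libraries and pin the expected Inference Engine release. As a rough sanity check after building and installing the wheel (a sketch, not part of this repo; the model paths are placeholders), `cv2.getBuildInformation()` should mention the Inference Engine and the DNN module should accept the IE backend:

```python
# Hypothetical post-build check; "model.xml"/"model.bin" are placeholder IR files.
import cv2

# The build summary should report Inference Engine support when the
# INF_ENGINE_* CMake flags were picked up correctly.
print("Inference Engine" in cv2.getBuildInformation())

# Any OpenVINO IR model should load and accept the IE backend and CPU target.
net = cv2.dnn.readNet("model.xml", "model.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
```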
2 changes: 1 addition & 1 deletion create_wheel/setup.py
@@ -15,7 +15,7 @@ def __len__(self):

setuptools.setup(
name='opencv-python-inference-engine',
version='2021.04.13',
version='2021.07.10',
url="https://github.com/banderlog/opencv-python-inference-engine",
maintainer="Kabakov Borys",
license='MIT, Apache 2.0',
2 changes: 1 addition & 1 deletion dldt
Submodule dldt updated 10007 files
4 changes: 2 additions & 2 deletions download_all_stuff.sh
@@ -23,7 +23,8 @@ if test $(lsb_release -rs) != 18.04; then
fi

green "RESET GIT SUBMODULES"
# use `git fetch --unshallow && git checkout tags/<tag>` for update
# git checkout dev
# use `git fetch --tags && git checkout tags/<tag>` for update
git submodule update --init --recursive --depth=1 --jobs=4
# the command to restore changes will differ between git versions (e.g., `restore`)
git submodule foreach --recursive git checkout .
@@ -34,7 +35,6 @@ green "CLEAN BUILD DIRS"
find build/dldt/ -mindepth 1 -not -name 'dldt_setup.sh' -not -name '*.patch' -delete
find build/opencv/ -mindepth 1 -not -name 'opencv_setup.sh' -delete
find build/ffmpeg/ -mindepth 1 -not -name 'ffmpeg_*.sh' -delete
find build/openblas/ -mindepth 1 -not -name 'openblas_setup.sh' -delete

green "CLEAN WHEEL DIR"
find create_wheel/cv2/ -type f -not -name '__init__.py' -delete
2 changes: 1 addition & 1 deletion ffmpeg
Submodule ffmpeg updated 1963 files
2 changes: 1 addition & 1 deletion opencv
Submodule opencv updated 434 files
1 change: 0 additions & 1 deletion tests/README.md
@@ -19,7 +19,6 @@ cd tests

Something like the snippet below. The general idea is to test only inference speed, without preprocessing and decoding.
Also, the 1st inference must not be counted, because it loads all the stuff into memory.
I prefer to do such things in `ipython` or `jupyter` with `%timeit`.

**NB:** be strict about Backend and Target
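
For illustration only (a minimal sketch under assumed model paths and input shape, not the repo's actual `tests.py` or `speed_test.py`), a timing loop with an explicit backend and target, where the 1st inference is excluded, could look like this:

```python
# Minimal timing sketch; model paths and the 224x224 input shape are assumptions.
import time
import cv2
import numpy as np

net = cv2.dnn.readNet("model.xml", "model.bin")  # placeholder OpenVINO IR model
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

blob = np.zeros((1, 3, 224, 224), dtype=np.float32)
net.setInput(blob)
net.forward()  # 1st inference loads everything into memory, not counted

runs = 100
start = time.perf_counter()
for _ in range(runs):
    net.forward()
print((time.perf_counter() - start) / runs, "s per inference")
```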

18 changes: 9 additions & 9 deletions tests/examples.ipynb

Large diffs are not rendered by default.

Binary file modified tests/helloworld.png
22 changes: 1 addition & 21 deletions tests/prepare_and_run_tests.sh
@@ -68,30 +68,10 @@ for i in "${models[@]}"; do
wget "${url_start}/${i%.*}/FP32/${i}"
else
# checksum
sha256sum -c "${i}.sha256sum"
fi
done


# for speed test
# {filename: file_google_drive_id}
declare -A se_net=(["se_net.bin"]="1vbonFjVyleGRSd_wR-Khc1htsZybiHCG"
["se_net.xml"]="1Bz3EQwnes_iZ14iKAV6H__JZ2lynLmQz")

# for each key
for i in "${!se_net[@]}"; do
# if file exist
if [ -f $i ]; then
# checksum
sha256sum -c "${i}.sha256sum"
else
# get fileid from associative array and download file
wget --no-check-certificate "https://docs.google.com/uc?export=download&id=${se_net[$i]}" -O $i
sha256sum -c "${i}.sha256sum" || red "PROBLEMS ^^^"
fi
done

green "For \"$WHEEL\""
green "RUN TESTS with ./venv_t/bin/python ./tests.py"
./venv_t/bin/python ./tests.py
green "RUN TESTS with ./venv_t/bin/python ./speed_test.py"
./venv_t/bin/python ./speed_test.py
1 change: 0 additions & 1 deletion tests/se_net.bin.sha256sum

This file was deleted.

1 change: 0 additions & 1 deletion tests/se_net.xml.sha256sum

This file was deleted.

25 changes: 0 additions & 25 deletions tests/speed_test.py

This file was deleted.

4 changes: 2 additions & 2 deletions tests/text_recognition.py
@@ -37,7 +37,7 @@ def _get_confidences(self, img: np.ndarray, box: tuple) -> np.ndarray:
return outs

def do_ocr(self, img: np.ndarray, bboxes: List[tuple]) -> List[str]:
""" Run OCR pipeline for a single words
""" Run OCR pipeline with greedy decoder for each single word (bbox)
:param img: BGR image
:param bboxes: list of separate word bboxes (ymin, xmin, ymax, xmax)
@@ -60,7 +60,7 @@ def do_ocr(self, img: np.ndarray, bboxes: List[tuple]) -> List[str]:
for box in bboxes:
# confidence distribution across symbols
confs = self._get_confidences(img, box)
# get maximal confidence for the whole beam width
# get maximal confidence for the whole beam width aka greedy decoder
idxs = confs[:, 0, :].argmax(axis=1)
# drop blank characters '#' with id == 36 in charvec
# supposedly we take only separate words as input
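
For context, here is a sketch of standard greedy CTC decoding under the assumptions in the comments above (`confs` of shape `(T, 1, num_classes)`, blank id 36); the charvec ordering is assumed and the repo's `do_ocr` may differ in details:

```python
# Greedy CTC decoding sketch: argmax per time step, collapse repeats, drop blanks.
import numpy as np

def greedy_decode(confs: np.ndarray, charvec: str, blank_id: int = 36) -> str:
    idxs = confs[:, 0, :].argmax(axis=1)      # best symbol per time step
    out, prev = [], -1
    for i in idxs:
        if i != prev and i != blank_id:       # collapse repeats, skip blanks
            out.append(charvec[i])
        prev = i
    return ''.join(out)

# Assumed 37-symbol charvec with '#' (the blank) at index 36.
charvec = '0123456789abcdefghijklmnopqrstuvwxyz#'
confs = np.zeros((3, 1, 37), dtype=np.float32)
confs[0, 0, 12] = confs[1, 0, 12] = 1.0       # 'c' twice, collapsed to one
confs[2, 0, 10] = 1.0                          # 'a'
print(greedy_decode(confs, charvec))          # -> ca
```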
