Merge pull request #213 from opendr-eu/merge-master-into-develop
Merge `master` branch into `develop` branch
passalis committed Feb 4, 2022
2 parents ddfa801 + a5c65a9 commit a3c31c0
Showing 100 changed files with 18,171 additions and 7 deletions.
72 changes: 72 additions & 0 deletions .github/workflows/publisher.yml
@@ -0,0 +1,72 @@
```yaml
name: Publisher

# Trigger on a new GitHub release; a tag of the form vX.Y.Z is expected (used to tag the Docker images)
on:
  release:
    types: [published]

env:
  OPENDR_VERSION: ${{ github.event.release.tag_name }}

defaults:
  run:
    shell: bash

jobs:
  publish-wheel:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install prerequisites
        run: |
          python -m pip install --upgrade pip
          pip install setuptools wheel twine
      - name: Build Wheel
        run: |
          ./bin/build_wheel.sh
      - name: Upload Wheel
        env:
          TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
        run: |
          twine upload dist/*
  publish-docker-cpu:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build Docker Image
        run: docker build --tag opendr-toolkit:cpu_$OPENDR_VERSION --file Dockerfile .
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Publish Image
        run: |
          docker tag opendr-toolkit:cpu_$OPENDR_VERSION opendr/opendr-toolkit:cpu_$OPENDR_VERSION
          docker push opendr/opendr-toolkit:cpu_$OPENDR_VERSION
  publish-docker-cuda:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build Docker Image
        run: docker build --tag opendr-toolkit:cuda_$OPENDR_VERSION --file Dockerfile-cuda .
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Publish Image
        run: |
          docker tag opendr-toolkit:cuda_$OPENDR_VERSION opendr/opendr-toolkit:cuda_$OPENDR_VERSION
          docker push opendr/opendr-toolkit:cuda_$OPENDR_VERSION
```
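For context, a release that triggers this workflow would be cut from a `vX.Y.Z` tag. A minimal sketch of that flow, using a hypothetical version number:

```bash
# Hypothetical release flow: publishing a GitHub release for tag v1.1.0
# fires the Publisher workflow, so OPENDR_VERSION resolves to "v1.1.0".
git tag v1.1.0
git push origin v1.1.0
gh release create v1.1.0 --title "v1.1.0"   # or create the release in the web UI

# After the jobs finish, the published artifacts could be consumed as:
pip install opendr-toolkit                      # wheel from publish-wheel
docker pull opendr/opendr-toolkit:cpu_v1.1.0    # image from publish-docker-cpu
docker pull opendr/opendr-toolkit:cuda_v1.1.0   # image from publish-docker-cuda
```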
3 changes: 1 addition & 2 deletions .github/workflows/test_packages.yml
```diff
@@ -61,7 +61,6 @@ jobs:
           source venv/bin/activate
           wget https://raw.githubusercontent.com/opendr-eu/opendr/master/dependencies/pip_requirements.txt
           cat pip_requirements.txt | xargs -n 1 -L 1 pip install
-          # Test new package
           pip install opendr-toolkit
           python -m unittest discover -s tests/sources/tools/${{ matrix.package }}
   test-docker:
@@ -89,7 +88,7 @@ jobs:
         - control/mobile_manipulation
         - simulation/human_model_generation
         - control/single_demo_grasp
-        #- perception/object_tracking_3d
+        # - perception/object_tracking_3d
     runs-on: ${{ matrix.os }}
     steps:
       - name: Set up Python 3.8
```
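The steps of this job can be approximated on a local machine. A rough sketch, assuming Ubuntu with Python 3.8 and the repository's test layout (the package path below is just an example):

```bash
# Approximate local reproduction of the package test job.
python3 -m venv venv
source venv/bin/activate
wget https://raw.githubusercontent.com/opendr-eu/opendr/master/dependencies/pip_requirements.txt
cat pip_requirements.txt | xargs -n 1 -L 1 pip install
pip install opendr-toolkit
# Run the tests of a single package, e.g. pose estimation:
python -m unittest discover -s tests/sources/tools/perception/pose_estimation
```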
134 changes: 132 additions & 2 deletions .github/workflows/tests_suite.yml
```diff
@@ -12,7 +12,7 @@ defaults:
 
 jobs:
   cleanup-runs:
-    if: ${{ contains(github.event.pull_request.labels.*.name, 'test sources') || contains(github.event.pull_request.labels.*.name, 'test tools') || github.event_name == 'schedule' }}
+    if: ${{ contains(github.event.pull_request.labels.*.name, 'test sources') || contains(github.event.pull_request.labels.*.name, 'test tools') || contains(github.event.pull_request.labels.*.name, 'test release') || github.event_name == 'schedule' }}
     runs-on: ubuntu-latest
     steps:
       - uses: rokroskar/workflow-run-cleanup-action@master
@@ -106,4 +106,134 @@ jobs:
           source tests/sources/tools/control/mobile_manipulation/run_ros.sh
           python -m unittest discover -s tests/sources/tools/${{ matrix.package }}
         fi
+  build-wheel:
+    needs: cleanup-runs
+    if: ${{ contains(github.event.pull_request.labels.*.name, 'test release') || github.event_name == 'schedule' }}
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          submodules: true
+      - name: Set up Python 3.8
+        uses: actions/setup-python@v2
+        with:
+          python-version: 3.8
+      - name: Install prerequisites
+        run: |
+          python -m pip install --upgrade pip
+          pip install setuptools wheel twine
+      - name: Build Wheel
+        run:
+          ./bin/build_wheel.sh
+      - name: Upload wheel as artifact
+        uses: actions/upload-artifact@v2
+        with:
+          path:
+            dist/*.tar.gz
+  build-docker:
+    needs: cleanup-runs
+    if: ${{ contains(github.event.pull_request.labels.*.name, 'test release') || github.event_name == 'schedule' }}
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          submodules: true
+      - name: Build image
+        run: |
+          docker build --tag opendr/opendr-toolkit:cpu_test --file Dockerfile .
+          docker save opendr/opendr-toolkit:cpu_test > cpu_test.zip
+      - name: Upload image artifact
+        uses: actions/upload-artifact@v2
+        with:
+          path:
+            cpu_test.zip
+  test-wheel:
+    needs: build-wheel
+    if: ${{ contains(github.event.pull_request.labels.*.name, 'test release') || github.event_name == 'schedule' }}
+    strategy:
+      matrix:
+        os: [ubuntu-20.04]
+        package:
+          - engine
+          - utils
+          - perception/activity_recognition
+          - perception/compressive_learning
+          - perception/face_recognition
+          - perception/heart_anomaly_detection
+          - perception/multimodal_human_centric
+          - perception/object_tracking_2d
+          - perception/pose_estimation
+          - perception/speech_recognition
+          - perception/skeleton_based_action_recognition
+          - perception/semantic_segmentation
+          - perception/object_detection_2d
+          - perception/facial_expression_recognition
+          # - perception/object_detection_3d
+          # - control/mobile_manipulation
+          # - simulation/human_model_generation
+          # - control/single_demo_grasp
+          # - perception/object_tracking_3d
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          submodules: true
+      - name: Set up Python 3.8
+        uses: actions/setup-python@v2
+        with:
+          python-version: 3.8
+      - name: Download artifact
+        uses: actions/download-artifact@v2
+        with:
+          path: artifact
+      - name: Get branch name
+        id: branch-name
+        uses: tj-actions/branch-names@v5.1
+      - name: Test Wheel
+        run: |
+          export DISABLE_BCOLZ_AVX2=true
+          sudo apt -y install python3.8-venv libfreetype6-dev git build-essential cmake python3-dev wget libopenblas-dev libsndfile1 libboost-dev python3-dev
+          python3 -m venv venv
+          source venv/bin/activate
+          wget https://raw.githubusercontent.com/opendr-eu/opendr/${{ steps.branch-name.outputs.current_branch }}/dependencies/pip_requirements.txt
+          cat pip_requirements.txt | xargs -n 1 -L 1 pip install
+          pip install ./artifact/artifact/*.tar.gz
+          python -m unittest discover -s tests/sources/tools/${{ matrix.package }}
+  test-docker:
+    needs: build-docker
+    if: ${{ contains(github.event.pull_request.labels.*.name, 'test release') || github.event_name == 'schedule' }}
+    strategy:
+      matrix:
+        os: [ubuntu-20.04]
+        package:
+          - engine
+          - utils
+          - perception/activity_recognition
+          - perception/compressive_learning
+          - perception/face_recognition
+          - perception/heart_anomaly_detection
+          - perception/multimodal_human_centric
+          - perception/object_tracking_2d
+          - perception/pose_estimation
+          - perception/speech_recognition
+          - perception/skeleton_based_action_recognition
+          - perception/semantic_segmentation
+          - perception/object_detection_2d
+          - perception/facial_expression_recognition
+          - perception/object_detection_3d
+          - control/mobile_manipulation
+          - simulation/human_model_generation
+          - control/single_demo_grasp
+          # - perception/object_tracking_3d
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Download artifact
+        uses: actions/download-artifact@v2
+        with:
+          path: artifact
+      - name: Test docker
+        run: |
+          docker load < ./artifact/artifact/cpu_test.zip
+          docker run --name toolkit -i opendr/opendr-toolkit:cpu_test bash
+          docker start toolkit
+          docker exec -i toolkit bash -c "source bin/activate.sh && source tests/sources/tools/control/mobile_manipulation/run_ros.sh && python -m unittest discover -s tests/sources/tools/${{ matrix.package }}"
```
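The build-docker and test-docker jobs hand the image off through a saved archive rather than a registry. A minimal sketch of the same round trip outside CI, reusing the `cpu_test` tag from the workflow above (the final package path is just an example):

```bash
# Build and serialize the image (as in build-docker)...
docker build --tag opendr/opendr-toolkit:cpu_test --file Dockerfile .
docker save opendr/opendr-toolkit:cpu_test > cpu_test.zip

# ...then restore and exercise it (as in test-docker):
docker load < cpu_test.zip
docker run --name toolkit -i opendr/opendr-toolkit:cpu_test bash
docker start toolkit
docker exec -i toolkit bash -c "source bin/activate.sh && python -m unittest discover -s tests/sources/tools/engine"
```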
7 changes: 5 additions & 2 deletions docs/reference/index.md
```diff
@@ -67,11 +67,14 @@ Neither the copyright holder nor any applicable licensor will be liable for any
   - [single_demo_grasp Module](single-demonstration-grasping.md)
 
 - `simulation` Module
   - [human_model_generation Module](human_model_generation.md)
+- `data_generation` Module
+  - [synthetic_facial_image_generation Module](synthetic_facial_image_generator.md)
+  - [human_model_generation Module](human-model-generation.md)
 - `utils` Module
   - [Hyperparameter Tuning Module](hyperparameter_tuner.md)
-  - `Stand-alone Utility Frameworks`
-    - [Engine Agnostic Gym Environment with Reactive extension (EAGERx)](eagerx.md)
+- `Stand-alone Utility Frameworks`
+  - [Engine Agnostic Gym Environment with Reactive extension (EAGERx)](eagerx.md)
 - [ROSBridge Package](rosbridge.md)
 - [C Inference API](c-api.md)
   - [data.h](c-data-h.md)
```
58 changes: 58 additions & 0 deletions docs/reference/synthetic_facial_image_generator.md
@@ -0,0 +1,58 @@
## synthetic_facial_image_generator module

The *synthetic_facial_image_generator* module contains the *MultiviewDataGeneration* class, which implements the multi-view facial image rendering operation.

### Class MultiviewDataGeneration

The *MultiviewDataGeneration* class is a wrapper of the Rotate-and-Render [[1]](#R-R-paper) photorealistic multi-view facial image generator, based on the original
[Rotate-and-Render implementation](https://github.com/Hangz-nju-cuhk/Rotate-and-Render).
It can be used to generate multi-view facial images from a single-view image in the wild (via the *eval* method).
The [MultiviewDataGeneration](#projects.data_generation.synthetic-multi-view-facial-image-generation.3ddfa.SyntheticDataGeneration.py) class has the
following public methods:

#### `MultiviewDataGeneration` constructor
```python
MultiviewDataGeneration(self, args)
```

The main parameters of the constructor's *args* are explained below:

- **path_in**: *str, default='./example/Images'* \
  An absolute path to the folder that contains the set of single-view facial image snapshots to be processed by the algorithm.
- **path_3ddfa**: *str, default='./'* \
  An absolute path to the 3ddfa module folder of the software structure, as presented in the repository. This path is necessary so that the software can create, in the *results* folder under this path, the folders for the intermediate/temporary storage of files generated during pre-processing, such as 3D face models and facial landmarks.
- **save_path**: *str, default='./results'* \
  The folder in which the output images are stored.
- **val_yaw**: *str, default='10,20'* \
  The yaw angles (in the interval [−90°, 90°]) for which the rendered images will be produced.
- **val_pitch**: *str, default='30,40'* \
  The pitch angles (in the interval [−90°, 90°]) for which the rendered images will be produced.
- **device**: *{'cuda', 'cpu'}, default='cpu'* \
  Specifies the device to be used.


#### `MultiviewDataGeneration.eval`
```python
MultiviewDataGeneration.eval()
```

This method implements the main procedure for creating the multi-view facial images, which consists of three stages.
The main parameters of the 3DDFA network are initialized here rather than in the constructor; the first stage then detects the candidate faces in the input images and fits a 3D head mesh using 3DDFA.
The second stage extracts the facial landmarks in order to derive the head pose and align the images with the 3D head-model mesh.
Finally, the multi-view facial image rendering itself is executed by loading the respective network parameters.

### Usage Example

```sh
python3 tool_synthetic_facial_generation.py -path_in ./demos/imgs_input/ -path_3ddfa ./algorithm/DDFA/ -save_path ./results -val_yaw 10, 40 -val_pitch 10, 30 -device cuda
```
The paths of the input and output folders, as well as the pitch and yaw angles for which the user wants to
produce the facial images, are passed in at class creation.
The process is executed for the CNN parameters and devices specified in the arguments of the above command.
Users who wish to modify these parameters should change the respective input arguments, which are derived from a parser that includes the arguments path_in, path_3ddfa, save_path, val_yaw, val_pitch, etc., as sketched below.
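For programmatic use, the same parameters can be packed into an *args* object and passed to the constructor. A minimal sketch, assuming the argument names listed above and that the class is imported from the project's `SyntheticDataGeneration.py`:

```python
from argparse import Namespace

# Hypothetical import path; the class lives in SyntheticDataGeneration.py.
from SyntheticDataGeneration import MultiviewDataGeneration

# Argument names follow the parser described above.
args = Namespace(
    path_in="./demos/imgs_input/",   # folder with single-view input images
    path_3ddfa="./algorithm/DDFA/",  # 3ddfa module folder (temporary files go here)
    save_path="./results",           # output folder for the rendered views
    val_yaw="10,40",                 # yaw angles to render, in [-90, 90]
    val_pitch="10,30",               # pitch angles to render, in [-90, 90]
    device="cuda",                   # 'cuda' or 'cpu'
)

generator = MultiviewDataGeneration(args)
generator.eval()  # face detection, 3D fitting, landmark alignment, rendering
```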

#### References
<a name="R-R-paper" href="https://github.com/Hangz-nju-cuhk/Rotate-and-Render">[1]</a>
Hang Zhou, Jihao Liu, Ziwei Liu, Yu Liu, Xiaogang Wang, Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images,
[arXiv](https://arxiv.org/abs/2003.08124#).
@@ -0,0 +1,70 @@
# Synthetic Multi-view Facial Image Generation based on Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images (CVPR 2020)

Based on: [[Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images]](https://arxiv.org/abs/2003.08124)

We utilize, with small modifications so that it can be easily executed, publicly available code: an unsupervised framework that can synthesize photorealistic rotated facial images from a single input facial image, or from multiple such images (one per person).
The implemented method rotates faces in 3D space back and forth and then re-renders them to the 2D plane.
The generated multi-view facial images can be used for different learning tasks, such as self-supervised learning.

## Sources:
* Face Alignment in Full Pose Range: A 3D Total Solution (IEEE TPAMI 2017)
* Neural 3D Mesh Renderer (CVPR 2018)
* Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images (CVPR 2020)

## Requirements
* Python 3.6 is used. Basic requirements are listed in `requirements.txt`:

```
pip3 install -r requirements.txt
```
* Install the [Neural_Renderer](https://github.com/daniilidis-group/neural_renderer) following the instructions.
```
pip install git+https://github.com/cidl-auth/neural_renderer
```

* Download the checkpoint and BFM model from [checkpoint.zip](ftp://opendrdata.csd.auth.gr/data_generation/synthetic_multi-view-facial-generator/ckpt_and_bfm.zip), put it in ```3ddfa``` and unzip it:
```bash
wget ftp://opendrdata.csd.auth.gr/data_generation/synthetic_multi-view-facial-generator/checkpoints.zip
unzip checkpoints.zip
unzip checkpoints/ckpt_and_bfm.zip -d 3ddfa
```
The 3D models are borrowed from [3DDFA](https://github.com/cleardusk/3DDFA).

* Compile the Cython code and download the remaining models:
```bash
cd algorithm/DDFA/utils/cython/
python3 setup.py build_ext -i
cd ../../../..
mkdir algorithm/DDFA/models
mkdir algorithm/DDFA/example
wget https://github.com/cleardusk/3DDFA/blob/master/models/phase1_wpdc_vdc.pth.tar?raw=true -O algorithm/DDFA/models/phase1_wpdc_vdc.pth.tar
```

## Usage Example

1. Execute the one-step OpenDR function ```tool_synthetic_facial_generation.py```, specifying the input images folder, the output folder, and the desired degrees (in the range -90 to 90) for generating the facial images at multiple pitch and yaw view angles, as indicated in the command line:
```sh
python3 tool_synthetic_facial_generation.py -path_in ./demos/imgs_input/ -path_3ddfa ./algorithm/DDFA/ -save_path ./results -val_yaw 10, 40 -val_pitch 10, 30 -device cuda
```

2. The results can be found in ```results/rs_model/example/```, where the generated multi-view facial images for every person are placed in a respective folder.

## License
Rotate-and-Render is provided under the [CC-BY-4.0](https://github.com/Hangz-nju-cuhk/Rotate-and-Render/blob/master/LICENSE) license.
SPADE, SyncBN and 3DDFA are under the [MIT License](https://github.com/tasostefas/opendr_internal/blob/synthetic-multi-view-facial-generator/projects/data_generation/synthetic-multi-view-facial-image-generation/3ddfa/LICENSE).

## Acknowledgement
Large parts of the code are taken from:
* The structure of this codebase is borrowed from [SPADE](https://github.com/NVlabs/SPADE).
* The [SyncBN](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch) module is used in the current code.
* The [3DDFA](https://github.com/cleardusk/3DDFA) implementation is used for 3D reconstruction.
* The code of [Rotate-and-Render](https://github.com/Hangz-nju-cuhk/Rotate-and-Render/).

## Minor Modifications
The following modifications were made to make the above compatible with the OpenDR specifications:
1. All scripts: PEP8 changes.
2. ```3ddfa/preprocessing_1.py, 3ddfa/preprocessing_2.py, test_multipose.py```: modified to work as callable functions.
3. ```options/base_options.py, options/test_options.py```: commented out/changed several parameters so they can be easily executed.
4. ```models/networks/render.py```: minor functional changes.
5. The OpenDR-created functions are ```SyntheticDataGeneration.py, tool_synthetic_facial_generation.py```.
6. The rest are taken from the aforementioned repositories.