Rnnpool facedetection #215

Merged: 36 commits merged into master on Jan 4, 2021
Conversation

oindrilasaha (Contributor): added conference_room_m4 folder

@harsha-simhadri (Collaborator) left a comment:

I recommend consolidating the new Conference_Room_M4 folder into the Face_Detection folder. Many files are duplicated or differ only slightly (see the lists below). Files that merely add new options to existing files can be folded in, and files that are entirely new to Conference_Room_M4 can be added to Face_Detection as-is.

The following files are the same between the Conference_Room_M4/ and Face_Detection/ folders:

  1. prepare_wider_data.py
  2. requirements.txt
  3. data/__init__.py
  4. data/choose_config.py
  5. data/config.py
  6. dump_model.py
  7. layers/__init__.py
  8. layers/bbox_utils.py
  9. layers/functions/__init__.py
  10. layers/functions/detection.py
  11. layers/functions/prior_box.py
  12. layers/modules/__init__.py
  13. layers/modules/l2norm.py
  14. layers/modules/multibox_loss.py
  15. utils/__init__.py

The following files are ALMOST the same, with only minor edits:

  1. Conference_Room_M4/data/config_qvga.py adds `_C.FACE.SCUT_DIR = '/mnt/SCUT_HEAD_Part_B'`.

  2. Conference_Room_M4/data/widerface.py adds an is_scut option. In the diff below, `<` lines are the Conference_Room_M4 version and `>` lines the Face_Detection version:

         < from data.choose_config import cfg
         < cfg = cfg.cfg
         18c16
         < def __init__(self, list_file, mode='train', mono_mode=False, is_scut=False):
         > def __init__(self, list_file, mode='train', mono_mode=False):
         < if is_scut==True:
         <     self.fnames.append(cfg.FACE.SCUT_DIR + '/' + line[0])
         < else:
         <     self.fnames.append(line[0])
         > self.fnames.append(line[0])

  3. Conference_Room_M4/eval.py changes the default weights, the detection threshold, and the set of architecture choices:

         < default='weights/rpool_face_m4.pth', help='trained model')
         > default='weights/rpool_face_c.pth', help='trained model')
         < parser.add_argument('--thresh', default=0.5, type=float,
         > parser.add_argument('--thresh', default=0.17, type=float,
         < default='RPool_Face_M4', type=str,
         < choices=['RPool_Face_M4'],
         > default='RPool_Face_C', type=str,
         > choices=['RPool_Face_C', 'RPool_Face_Quant', 'RPool_Face_QVGA_monochrome'],

  4. models/__init__.py can be consolidated to import the 3 model directories (see the sketch after the next list).

The following files can be consolidated into one directory:

  1. M4/config_qvga.py can be moved to Face_Detection/.
  2. M4/data/config_qvga.py can live inside Face_Detection/data/.
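
For illustration, a consolidated Face_Detection/models/__init__.py could simply re-export every architecture module from one place. This is only a sketch: the exact module names are assumed from the --model_arch choices quoted above (plus the new M4 model) and may not match the files actually present in the models/ directory.

```python
# Illustrative sketch of a consolidated Face_Detection/models/__init__.py.
# Module names are assumed from the --model_arch choices in eval.py quoted
# above; adjust to the actual files in the models/ directory.
from . import RPool_Face_C
from . import RPool_Face_Quant
from . import RPool_Face_QVGA_monochrome
from . import RPool_Face_M4

__all__ = [
    'RPool_Face_C',
    'RPool_Face_Quant',
    'RPool_Face_QVGA_monochrome',
    'RPool_Face_M4',
]
```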



    if __name__ == '__main__':
        train()

Collaborator: Add a newline at the end of each file.

Resolved (outdated) review threads on:
  examples/pytorch/vision/Conference_Room_M4/train.py
  examples/pytorch/vision/Conference_Room_M4/README.md
  examples/pytorch/vision/Conference_Room_M4/evaluation.py
    @@ -182,7 +182,9 @@ def sparsify(self):
                mats[i] = utils.hardThreshold(mats[i], self._wSparsity)
            for i in range(endW, endU):
                mats[i] = utils.hardThreshold(mats[i], self._uSparsity)
            self.copy_previous_UW()
            self.W.data.copy_(mats[0])

Collaborator: Have you tested that these changes do not break other RNNPool use cases?
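
One cheap guard would be a backward-compatibility smoke test along the lines of the sketch below, which only checks that the pre-existing five-argument construction of RNNPool still works. The import path is an assumption based on the EdgeML package layout, and forward() is deliberately not exercised because its calling convention is not shown in this thread.

```python
import torch

# Assumed import path for the EdgeML PyTorch package; adjust if it differs.
from edgeml_pytorch.graph.rnnpool import RNNPool


def test_rnnpool_old_signature_still_constructs():
    # Existing callers pass only the original five arguments; the new optional
    # sparsity-related parameters added in this PR should default so that this
    # construction keeps working unchanged.
    pool = RNNPool(nRows=8, nCols=8, nHiddenDims=16,
                   nHiddenDimsBiDir=16, inputDims=4)
    assert isinstance(pool, torch.nn.Module)


if __name__ == '__main__':
    test_rnnpool_old_signature_still_constructs()
    print("old RNNPool signature still constructs")
```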

    @@ -8,25 +8,32 @@

     class RNNPool(nn.Module):
         def __init__(self, nRows, nCols, nHiddenDims,
    -                  nHiddenDimsBiDir, inputDims):
    +                  nHiddenDimsBiDir, inputDims,

Collaborator: Any tests for this sparsity feature?
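
As a starting point, a test along the following lines could pin down the intended semantics of the hard-thresholding step in sparsify(). It is only a sketch: the local hard_threshold helper is a stand-in written under the assumption that utils.hardThreshold keeps the largest-magnitude fraction of entries given by the sparsity parameter, and it should be swapped for the real utility when wiring this into the test suite.

```python
import numpy as np


def hard_threshold(mat, keep_frac):
    # Local stand-in for utils.hardThreshold (assumed semantics): zero out all
    # but the largest-magnitude keep_frac fraction of entries.
    k = max(1, int(keep_frac * mat.size))
    cutoff = np.partition(np.abs(mat).ravel(), -k)[-k]
    return np.where(np.abs(mat) >= cutoff, mat, 0.0)


def test_sparsity_level_is_respected():
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64))
    s = 0.3  # target fraction of non-zeros, analogous to self._wSparsity
    w_sparse = hard_threshold(w, s)
    nnz_frac = np.count_nonzero(w_sparse) / w_sparse.size
    # At most an s fraction of entries survives, and survivors are unchanged.
    assert nnz_frac <= s
    assert np.all((w_sparse == 0) | (w_sparse == w))


if __name__ == '__main__':
    test_sparsity_level_is_respected()
    print("sparsity check passed")
```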

Collaborator: @oindrilasaha: any thoughts?

@ShikharJ (Contributor) left a comment:

A few minor comments, else looks good.

Resolved (outdated) review threads on:
  examples/pytorch/vision/Face_Detection/README.md
  examples/pytorch/vision/Face_Detection/README_M4.md (5 threads)
@ShikharJ (Contributor): While unzipping WIDER FACE on Windows, I ran into "File Name Too Long" errors, so we should probably document that for Windows users as well.

Resolved (outdated) review thread on examples/pytorch/vision/Face_Detection/models/__init__.py.
_C.FACE.WIDER_DIR='/mnt/WIDER_FACE'.

3. Run
``` python prepare_wider_data.py ```
Collaborator: Failed with the following error:

    (py37) harshasi@GPUnode1:~/EdgeML/examples/pytorch/vision/Face_Detection$ python3 prepare_wider_data.py
    Traceback (most recent call last):
      File "prepare_wider_data.py", line 10, in <module>
        from data.config import cfg
      File "/home/harshasi/EdgeML/examples/pytorch/vision/Face_Detection/data/__init__.py", line 4, in <module>
        from .widerface import WIDERDetection
      File "/home/harshasi/EdgeML/examples/pytorch/vision/Face_Detection/data/widerface.py", line 10, in <module>
        from utils.augmentations import preprocess, preprocess_qvga
      File "/home/harshasi/EdgeML/examples/pytorch/vision/Face_Detection/utils/__init__.py", line 4, in <module>
        from .augmentations import *
      File "/home/harshasi/EdgeML/examples/pytorch/vision/Face_Detection/utils/augmentations.py", line 19, in <module>
        from data.choose_config import cfg
      File "/home/harshasi/EdgeML/examples/pytorch/vision/Face_Detection/data/choose_config.py", line 7, in <module>
        IS_QVGA_MONO = os.environ['IS_QVGA_MONO']
      File "/home/harshasi/anaconda3/envs/py37/lib/python3.7/os.py", line 678, in __getitem__
        raise KeyError(key) from None
    KeyError: 'IS_QVGA_MONO'

Contributor (author): I have updated the README to avoid this error.

Contributor: Shouldn't this ideally be a different flag? This step needs to be run even when the images aren't monochrome.
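
Independent of how the flag question is resolved, the KeyError above comes from an unguarded os.environ lookup in choose_config.py. A minimal sketch of a more forgiving read is shown below; this is illustrative only, not the repository's actual data/choose_config.py, and it assumes the documented convention that 1 selects config_qvga.py and any other value selects config.py.

```python
import os

# Illustrative sketch only (not the actual data/choose_config.py): read the
# flag with a default so that scripts such as prepare_wider_data.py do not
# crash with KeyError when IS_QVGA_MONO is unset.
IS_QVGA_MONO = os.environ.get('IS_QVGA_MONO', '0')

if IS_QVGA_MONO == '1':
    from data import config_qvga as cfg   # QVGA monochrome configuration
else:
    from data import config as cfg        # standard configuration
```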

@@ -22,8 +23,12 @@ cd ..
That is, if the WIDER_FACE folder is created in the /mnt folder, then _C.HOME='/mnt'
and _C.FACE.WIDER_DIR='/mnt/WIDER_FACE'.
Similarly, change `data/config_qvga.py` to set _C.HOME and _C.FACE.WIDER_DIR.
For all the following commands, the environment variable IS_QVGA_MONO has to be set to 0 to use config.py and to 1 to use config_qvga.py as the configuration file.

Note that for Windows, '/' should be replaced by '\' for each path in the config files.
@ShikharJ (Contributor), Nov 30, 2020: '\' gets cleared in the rendered README.md. Please use '\\' here.

@harsha-simhadri (Collaborator) left a comment:

I was able to follow README_M4. I have some suggestions for improving the file and would recommend testing by another user as well. I will review the rest of the files shortly.

@ShikharJ (Contributor): @harsha-simhadri Can we get this merged? I need this either rebased onto master or merged in for the blog.



##### Dump RNNPool Input Output Traces and Weights

To save model weights and/or input-output pairs for each patch through RNNPool in numpy format, use the command below. Put the images you want to save traces for in <your_image_folder>. Specify the output folder for saving model weights in numpy format in <your_save_model_numpy_folder>, and the output folder for saving the input-output traces of RNNPool in numpy format in <your_save_traces_numpy_folder>. Note that input traces are saved in a folder named 'inputs' and output traces in a folder named 'outputs' inside <your_save_traces_numpy_folder>.

```shell
python3 dump_model.py --model ./weights/RPool_Face_QVGA_monochrome_best_state.pth --model_arch RPool_Face_Quant --image_folder <your_image_folder> --save_model_npy_dir <your_save_model_numpy_folder> --save_traces_npy_dir <your_save_traces_numpy_folder>
```
Contributor: In the command above, the weights being used are RPool_Face_QVGA_monochrome_best_state.pth, but the model architecture being used is RPool_Face_Quant?

Collaborator: @oindrilasaha I suppose this is just out-of-place naming, right? Or are you actually using a different set of weights?

Contributor (author): Not using different weights; I'll update the README.

@harsha-simhadri merged commit e4d5255 into master on Jan 4, 2021.