
pedestrian_generator

This repo was first cloned from GLCIC-PyTorch. We change the loss, add Poisson blending (poisson_blend), and use Mask R-CNN to obtain better pedestrian images. (Getting good pedestrian images would otherwise have to be done by humans; with this repo we only need to label a small number of people and can then generate much more pedestrian data.) We do get better performance on the FPN detection model, as shown at the bottom of this README.
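
For reference, the poisson_blend step can be approximated with OpenCV's seamlessClone. This is only a minimal sketch of the idea with placeholder file names, not necessarily the exact implementation used in this repo:

```python
import cv2
import numpy as np

# Minimal Poisson-blending sketch (all file names are placeholders).
person = cv2.imread("person.jpg")                           # pedestrian patch
street = cv2.imread("street.jpg")                           # background street image
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)  # binary mask of the person

# Binarize the mask and choose where to paste on the street image.
mask = np.where(mask > 127, 255, 0).astype(np.uint8)
center = (street.shape[1] // 2, street.shape[0] // 2)

# seamlessClone solves the Poisson equation so the pasted patch matches the
# background gradients (no hard seam around the person).
blended = cv2.seamlessClone(person, street, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.jpg", blended)
```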

What do we do?

(overview diagram)

Prepare training dataset

In our last version, we first convert the Caltech Pedestrian Detection Benchmark dataset to image files with caltech_pedestrian_extractor (.seq to .jpg).

Then we separate Caltech into two datasets. (Images that already have pedestrians in them would disturb the result.)

  1. Images that already have pedestrians > used as baseline data for comparison.
  2. Images with no pedestrians > used as the training dataset.

⬇️ have pedestrians ⬇️ no pedestrians

see caltech_for_detectron.ipynb
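
The split can be sketched roughly as follows, assuming each extracted frame has a per-image JSON annotation listing pedestrian boxes (the folder names and annotation format here are assumptions, not the exact layout used in caltech_for_detectron.ipynb):

```python
import json
import shutil
from pathlib import Path

# Hypothetical layout: frames/ holds the extracted .jpg files and
# annotations/ holds one JSON file per frame with a "boxes" list.
frames = Path("frames")
annotations = Path("annotations")
with_ped = Path("have_pedestrian")
without_ped = Path("no_pedestrian")
with_ped.mkdir(exist_ok=True)
without_ped.mkdir(exist_ok=True)

for img_path in sorted(frames.glob("*.jpg")):
    ann_path = annotations / (img_path.stem + ".json")
    boxes = json.loads(ann_path.read_text()).get("boxes", [])
    # Frames that already contain pedestrians become the baseline set;
    # empty frames become backgrounds for the generated training data.
    dest = with_ped if boxes else without_ped
    shutil.copy(img_path, dest / img_path.name)
```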

We also prepare pedestrian crops from the Market-1501 dataset, together with masks produced by Detectron.

see market_to_mask.ipynb
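
A rough sketch of mask extraction with a pre-trained Detectron2 Mask R-CNN; the model choice, score threshold, and paths are assumptions, and the notebook may post-process the masks differently:

```python
from pathlib import Path

import cv2
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Pre-trained Mask R-CNN from the Detectron2 model zoo.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)

out_dir = Path("market_masks")
out_dir.mkdir(exist_ok=True)
for img_path in sorted(Path("market1501").glob("*.jpg")):   # placeholder path
    outputs = predictor(cv2.imread(str(img_path)))
    inst = outputs["instances"].to("cpu")
    person = inst[inst.pred_classes == 0]                    # COCO class 0 = person
    if len(person) == 0:
        continue
    # Keep the first (highest-scoring) person mask as a binary image.
    mask = person.pred_masks[0].numpy().astype(np.uint8) * 255
    cv2.imwrite(str(out_dir / (img_path.stem + "_mask.png")), mask)
```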

Finally, we use the datasets above to generate our training dataset. We randomly select three positions where people will be pasted and record the coordinates, the scale, and the index of each person image in .json format. Each image gets 2 or 3 people (50% chance of each).

see gandatamask5_multi.ipynb
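
A rough sketch of the pasting step; the JSON field names, scale range, and folder names are illustrative assumptions rather than the exact schema written by gandatamask5_multi.ipynb (the notebook also pre-selects three candidate positions and can use Poisson blending instead of a hard paste):

```python
import json
import random
from pathlib import Path

import cv2

people = sorted(Path("people").glob("*.jpg"))   # pedestrian crops
masks = sorted(Path("mask").glob("*.png"))      # matching binary masks
json_dir = Path("json")
out_dir = Path("generated")
json_dir.mkdir(exist_ok=True)
out_dir.mkdir(exist_ok=True)

for street_path in sorted(Path("street").glob("*.jpg")):
    street = cv2.imread(str(street_path))
    h, w = street.shape[:2]
    records = []
    for _ in range(random.choice([2, 3])):      # 2 or 3 people, 50% chance each
        idx = random.randrange(len(people))
        scale = random.uniform(0.5, 1.0)        # illustrative scale range
        person = cv2.resize(cv2.imread(str(people[idx])), None, fx=scale, fy=scale)
        mask = cv2.resize(cv2.imread(str(masks[idx]), cv2.IMREAD_GRAYSCALE),
                          None, fx=scale, fy=scale)
        ph, pw = person.shape[:2]
        x, y = random.randrange(w - pw), random.randrange(h - ph)
        roi = street[y:y + ph, x:x + pw]
        roi[mask > 127] = person[mask > 127]    # simple paste; Poisson blend optional
        records.append({"index": idx, "x": x, "y": y, "scale": scale})
    (json_dir / (street_path.stem + ".json")).write_text(json.dumps(records, indent=2))
    cv2.imwrite(str(out_dir / street_path.name), street)
```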

⬇️training dataset

The training dataset has the following layout:

caltech_origin_mask8_42000.zip
├── street
├── people
├── mask
├── json
└── street_json

Training

In the training step, we paste people in the center of the image.

We train in three phases (a minimal loop is sketched below):

phase 1 > train the generator
phase 2 > train the discriminator
phase 3 > train both the generator and the discriminator


see gandatamask5_multi.ipynb
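
A minimal sketch of the three-phase schedule in PyTorch; the tiny networks, batch, losses, and iteration counts are placeholders, while the real models and losses follow GLCIC-PyTorch:

```python
import torch
import torch.nn as nn

# Tiny placeholder networks standing in for the GLCIC completion net and discriminator.
generator = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1))
discriminator = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Conv2d(16, 1, 1), nn.Flatten())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

def get_batch():
    # Stand-in batch; a real loader would read street/people/mask images from disk.
    target = torch.rand(4, 3, 64, 64)                 # image with the pasted pedestrian
    mask = (torch.rand(4, 1, 64, 64) > 0.5).float()   # region to complete
    return torch.cat([target * (1 - mask), mask], dim=1), target

def train(phase, iters):
    for _ in range(iters):
        inp, target = get_batch()
        fake = generator(inp)
        if phase in (2, 3):   # update the discriminator (phases 2 and 3)
            d_loss = bce(discriminator(target), torch.ones(4, 1)) + \
                     bce(discriminator(fake.detach()), torch.zeros(4, 1))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()
        if phase in (1, 3):   # update the generator (phases 1 and 3)
            g_loss = mse(fake, target)
            if phase == 3:    # add the adversarial term in the joint phase
                g_loss = g_loss + bce(discriminator(fake), torch.ones(4, 1))
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()

train(1, 100); train(2, 100); train(3, 100)           # placeholder iteration counts
```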

Generate our new dataset for the benchmark

see generator_v2.ipynb
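
A rough sketch of the generation step, assuming a trained completion network saved as generator.pth that takes a masked image plus mask as input (the checkpoint name, folder names, and input format are assumptions):

```python
from pathlib import Path

import cv2
import numpy as np
import torch

# Load the trained completion network (checkpoint name is a placeholder).
generator = torch.load("generator.pth", map_location="cpu")
generator.eval()

out_dir = Path("generated_dataset")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(Path("street").glob("*.jpg")):
    street = cv2.imread(str(img_path)).astype(np.float32) / 255.0
    mask_img = cv2.imread(str(Path("mask") / (img_path.stem + ".png")),
                          cv2.IMREAD_GRAYSCALE)
    mask = (mask_img > 127).astype(np.float32)

    img = torch.from_numpy(street).permute(2, 0, 1)[None]   # 1 x 3 x H x W
    msk = torch.from_numpy(mask)[None, None]                # 1 x 1 x H x W
    with torch.no_grad():
        out = generator(torch.cat([img * (1 - msk), msk], dim=1))
    # Keep original pixels outside the mask, use the generator's output inside it.
    result = img * (1 - msk) + out * msk
    result = (result[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    cv2.imwrite(str(out_dir / img_path.name), result)
```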

Benchmark

Baseline: we use the 42000 Caltech images that contain pedestrians and train for 126000 iterations.

Our new dataset: we use 40000 generated images and train for 30000 iterations; after that, we continue training on the 42000 Caltech images for another 80000 iterations.

Detectron2 Benchmark usage
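
A minimal sketch of a Detectron2 setup for such a benchmark, assuming the image sets are exported in COCO format; the dataset names, paths, and iteration count are placeholders:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the pedestrian sets (COCO-style JSON + image folder, paths are placeholders).
register_coco_instances("pedestrian_train", {}, "annotations/train.json", "images/train")
register_coco_instances("pedestrian_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("pedestrian_train",)
cfg.DATASETS.TEST = ("pedestrian_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1      # only the "person" class
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 30000              # e.g. first stage on the generated images

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```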

(benchmark result images)

Contributors

Contributor 1 > Prepare training dataset, Training, Discuss, Presentation

Contributor 2 > Training, Benchmark, Discuss, Presentation

Contributor 3 > Prepare training dataset, Discuss, Presentation
