
How many pictures are you using for training? #13

Closed
lyyiangang opened this issue Jan 17, 2019 · 7 comments

Comments

@lyyiangang

Hi,
I saw your gaze demo on YouTube; it's amazing. I downloaded your pre-trained model and tested it on my own videos, but didn't get good results. My questions are:

  1. How many pictures were used for your pre-trained model?
  2. Which gaze method is used in your video demo: the feature-based method or the model-based method?

Thanks very much.

swook (Owner) commented Jan 17, 2019

Hi, thanks for the kind comments.

  1. I believe I used almost a million images for training.
  2. The video uses a feature-based (SVR) method trained on MPIIGaze, if I remember correctly. The reference implementation in this repository does not do this; it relies on a somewhat inaccurate estimation of the eyeball centre and radius.
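For readers curious what a feature-based pipeline looks like, here is a minimal sketch using scikit-learn's SVR to map flattened eye-landmark coordinates to a (pitch, yaw) gaze angle. The feature layout and synthetic data below are illustrative assumptions, not the setup used for the video:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Toy data: 18 eye-landmark (x, y) pairs flattened into 36 features;
# the target is a gaze direction as (pitch, yaw) in radians.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))  # stand-in for normalized landmark coordinates
y = X[:, :2] * 0.1 + rng.normal(scale=0.01, size=(200, 2))  # synthetic gaze angles

# SVR is single-output, so wrap it to regress (pitch, yaw) jointly.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.01))
model.fit(X, y)

pred = model.predict(X[:5])
print(pred.shape)  # one (pitch, yaw) pair per sample: (5, 2)
```

With real data, `X` would come from detected eye landmarks and `y` from ground-truth gaze labels such as those in MPIIGaze.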

@lyyiangang (Author)

> Hi, thanks for the kind comments.
>
> 1. I believe I used almost a million images for training.
> 2. The video uses a feature-based (SVR) method trained on MPIIGaze, if I remember correctly. The reference implementation in this repository does not do this and relies on the somewhat inaccurate estimation of eyeball center and radius.

Thanks very much for your reply. It seems I need to generate more pictures for training.
Thanks very much.

@XhqGlorry11

@swook Hi, according to the checkpoint you provide, did you train the model for more than 4 million steps? Assuming your batch size is 32, that means your model has seen more than 120 million images?
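For reference, that estimate counts samples drawn with repetition, not unique images; with roughly 1 million unique images (as mentioned above) it corresponds to on the order of 128 passes over the data. A quick check, with the step count and batch size assumed from the comment:

```python
steps = 4_000_000          # training steps read off the checkpoint (assumed)
batch_size = 32            # assumed batch size
unique_images = 1_000_000  # unique training images before augmentation

samples_seen = steps * batch_size      # samples drawn, counting repeats
epochs = samples_seen / unique_images  # effective passes over the dataset
print(samples_seen, round(epochs))     # 128000000 128
```

So "120+ million images seen" is consistent with ~1 million unique images revisited many times, each time with different augmentation.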

swook (Owner) commented Mar 29, 2019 via email

@MinjingLin

> Hi, thanks for the kind comments.
>
> 1. I believe I used almost a million images for training.
> 2. The video uses a feature-based (SVR) method trained on MPIIGaze, if I remember correctly. The reference implementation in this repository does not do this and relies on the somewhat inaccurate estimation of eyeball center and radius.

Hi, I have a question: is it a million images before training-data augmentation, or after? I used UnityEyes to generate almost 140 thousand images, which comes to about 1 million after augmentation. Then I ran egg_train.py and hit this problem:


```
10/04 06:34 INFO 0079261> heatmaps_mse = 0.00100194, radius_mse = 1.17517e-07
10/04 06:34 INFO 0079270> heatmaps_mse = 0.00119301, radius_mse = 8.82096e-08
10/04 06:34 INFO 0079280> heatmaps_mse = 0.00114937, radius_mse = 1.55061e-07
10/04 06:34 INFO 0079289> heatmaps_mse = 0.00109943, radius_mse = 1.84821e-07
Exception in thread preprocess_UnityEyes_27:
Traceback (most recent call last):
  File "/home/wang/anaconda3/envs/tensorflow-gpu/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/home/wang/anaconda3/envs/tensorflow-gpu/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/media/wang/Toshiba/lmj/2019term/papers/GazeML/GazeML-win/src/core/data_source.py", line 245, in preprocess_job
    preprocessed_entry_dict = self.preprocess_entry(raw_entry)
  File "/media/wang/Toshiba/lmj/2019term/papers/GazeML/GazeML-win/src/datasources/unityeyes.py", line 237, in preprocess_entry
    thickness=int(6 * line_rand_nums[j + 4]), lineType=cv.LINE_AA)
cv2.error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/drawing.cpp:1811: error: (-215:Assertion failed) 0 < thickness && thickness <= MAX_THICKNESS in function 'line'
```
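The assertion suggests the computed `thickness` hit 0: `int(6 * line_rand_nums[j + 4])` truncates to zero whenever the random factor is below 1/6, while `cv.line` requires `0 < thickness`. One possible workaround (the clamp below is a suggestion, not the repository's official fix) is to bound the value at 1 before passing it to OpenCV:

```python
# int() truncation can turn a small positive random draw into 0, which
# violates OpenCV's `0 < thickness` assertion in cv.line. Clamping to 1
# keeps the thickness in the valid range.
def safe_thickness(rand_factor, scale=6):
    return max(1, int(scale * rand_factor))

print(safe_thickness(0.05))  # 1  (plain int(6 * 0.05) would be 0)
print(safe_thickness(0.5))   # 3
```

The same clamp could be applied in `preprocess_entry` where the traceback points, i.e. wrapping the existing `int(6 * line_rand_nums[j + 4])` in `max(1, ...)`.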

swook (Owner) commented Apr 10, 2019

I used approximately 1 million images before augmentation. The augmentation scheme is applied live during training, so it yields an effectively unbounded training set.
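To illustrate what "live" augmentation means in practice: each time an image is drawn for a batch, it receives fresh random transforms, so repeated draws of the same source image yield different training samples. A minimal sketch (the particular transforms are assumptions for illustration, not GazeML's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Apply fresh random transforms on every draw of an image."""
    out = image.astype(np.float32)
    out = out * rng.uniform(0.8, 1.2)            # random brightness scale
    out = out + rng.normal(0.0, 2.0, out.shape)  # additive pixel noise
    if rng.random() < 0.5:
        out = out[:, ::-1]                       # random horizontal flip
    return np.clip(out, 0, 255).astype(np.uint8)

base = np.full((36, 60), 128, dtype=np.uint8)  # stand-in for an eye patch
a, b = augment(base), augment(base)
print(np.array_equal(a, b))  # False: two draws give two distinct samples
```

Because the transforms are sampled anew at every step, counting "images seen" by multiplying steps by batch size overstates the number of unique source images but understates the variety of pixels the model actually trains on.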

Please open a separate issue for the error you ran into.

TulipDi commented Jun 25, 2019

> 1. I believe I used almost a million images for training.
> 2. The video uses a feature-based (SVR) method trained on MPIIGaze, if I remember correctly. The reference implementation in this repository does not do this and relies on the somewhat inaccurate estimation of eyeball center and radius.

Hi, I have a question about how to train the SVR on MPIIGaze. I got MPIIGaze via `get_mpiigaze_hdf.bash`, but I found this dataset does not have landmarks. Waiting for your reply!
