
about [Gaussian_yolo] #4190

Closed
Code-Fight opened this issue Oct 30, 2019 · 25 comments

Comments

@Code-Fight

Hi @AlexeyAB,
Thanks for sharing.
I saw that you added [Gaussian_yolo].
Can I train with it now? Are there any precautions?

Thanks

@AlexeyAB
Owner

AlexeyAB commented Oct 30, 2019

@Code-Fight Hi, yes.

  1. Just use [Gaussian_yolo] instead of [yolo] in the cfg-file.

  2. For [Gaussian_yolo] it should be:
    filters = (classes + 8 + 1) * <number of masks>

    instead of (for [yolo]):
    filters = (classes + 4 + 1) * <number of masks>
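The two formulas above differ only in the per-anchor coordinate count: [Gaussian_yolo] predicts an uncertainty for each of the 4 box coordinates, doubling 4 to 8. A minimal Python sketch of the arithmetic (helper names are illustrative, not darknet API):

```python
def yolo_filters(classes: int, masks: int) -> int:
    # [yolo]: 4 box coordinates + 1 objectness per anchor
    return (classes + 4 + 1) * masks

def gaussian_yolo_filters(classes: int, masks: int) -> int:
    # [Gaussian_yolo]: 4 coordinates + 4 uncertainties + 1 objectness
    return (classes + 8 + 1) * masks

# COCO-style example: 80 classes, 3 anchors per head
print(yolo_filters(80, 3))           # 255
print(gaussian_yolo_filters(80, 3))  # 267
```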

@Code-Fight
Author

Hi @AlexeyAB, thanks for your reply.
I got it.
However, when I trained on my own dataset, the network did not converge; the loss fluctuated around 10.
What could be the reason?
Single card, 1080 Ti, lr=0.001.
Thanks

@sctrueew

@AlexeyAB Hi,

Is the Gaussian_yolo layer supported in all the cfgs?
SPP, Tiny-PRN, etc.

@AlexeyAB
Owner

@zpmmehrdad Yes.
@Code-Fight Please show chart.png.

@Code-Fight
Author

Hi @AlexeyAB, thanks.
Here are my cfg and chart.png:

chart

sword_full_size_Gaussian.cfg.txt

@AlexeyAB
Owner

Try setting learning_rate=0.0001 instead of learning_rate=0.001 and train again.

@Code-Fight
Author

Hi @AlexeyAB, got it. I will train again.
Thank you very much.

@sctrueew

@AlexeyAB Hi,

I just changed [yolo] to [Gaussian_yolo], and when I use [Gaussian_yolo] with yolov3-tiny I get this error:
l.outputs == params.inputs, file ....\src\parser.c, line 431

Thanks

@AlexeyAB
Owner

@zpmmehrdad

For [Gaussian_yolo] it should be:
filters = (classes + 8 + 1) * <number of masks>

instead of (for [yolo]):
filters = (classes + 4 + 1) * <number of masks>
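That parser error fires when the [convolutional] layer directly before the head still uses the old filter count. A sketch of the corrected pair of sections, assuming a standard COCO setup (classes=80, 3-anchor mask, yolov3 anchors; adjust for your dataset):

```ini
[convolutional]
size=1
stride=1
pad=1
filters=267        # (80 classes + 8 + 1) * 3 masks
activation=linear

[Gaussian_yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=80
num=9
```

Every [Gaussian_yolo] head in the cfg needs its preceding [convolutional] adjusted this way, not just the first one.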

@Code-Fight
Author

Code-Fight commented Oct 31, 2019

Hi @AlexeyAB,
I trained again with learning_rate=0.0001 instead of learning_rate=0.001,
but I still have the same problem.

(chart screenshot)

What could be the reason?

Thanks

@AlexeyAB
Owner

This is normal for the [Gaussian_yolo] layer. Check the mAP.
Or try an even lower learning_rate=0.00001.

@Code-Fight
Author

@AlexeyAB Thanks, I will try again.

@Code-Fight
Author

Hi @AlexeyAB, I trained again with -map.
I found that the loss increases and the mAP increases too.
Do you know what the reason is?
Thank you very much.
chart

@AlexeyAB
Owner

AlexeyAB commented Nov 1, 2019

@Code-Fight I don't know; maybe the delta of the uncertainty is increasing: https://github.com/AlexeyAB/darknet/blob/master/src/gaussian_yolo_layer.c#L182-L185
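For intuition on why the loss can rise while mAP improves: Gaussian YOLOv3 trains each box coordinate with a Gaussian negative log-likelihood, so the loss also charges for the predicted uncertainty (sigma), not just the localization error. A minimal Python sketch of the per-coordinate term (this is an illustration, not the darknet C implementation):

```python
import math

def gaussian_nll(mu: float, sigma: float, target: float, eps: float = 1e-9) -> float:
    """Negative log-likelihood of a Gaussian box-coordinate prediction.

    mu     -- predicted coordinate (mean of the Gaussian)
    sigma  -- predicted uncertainty (standard deviation)
    target -- ground-truth coordinate

    The log-variance term grows with sigma, so the total loss can
    increase even while the squared localization error shrinks.
    """
    var = sigma * sigma + eps
    return 0.5 * math.log(2.0 * math.pi * var) + (target - mu) ** 2 / (2.0 * var)

# Perfect localization but large uncertainty still pays a penalty:
print(gaussian_nll(0.5, 0.5, 0.5) > gaussian_nll(0.5, 0.1, 0.5))  # True
```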

@Code-Fight
Author

@AlexeyAB Got it.
Thanks for your help.

@sctrueew

sctrueew commented Nov 3, 2019

@Code-Fight Hi,

What cfg did you use?

@Code-Fight
Author

Hi @zpmmehrdad,
I used yolov3.cfg
and changed [yolo] to [Gaussian_yolo].

@sctrueew

sctrueew commented Nov 3, 2019

@Code-Fight Thanks. Did you check [Gaussian_yolo] with SPP?

@Code-Fight
Author

@zpmmehrdad No, I didn't.

@sctrueew

sctrueew commented Nov 4, 2019

@Code-Fight Hi,

Could you share the mAP and FPS with [Gaussian_yolo] and with [yolo] on your dataset?
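As a side note, both numbers can be read off darknet itself; the commands below are from the AlexeyAB fork, and the cfg/weights/video file names are placeholders for your own files:

```shell
# mAP on the validation set listed in obj.data
./darknet detector map data/obj.data cfg/yolov3_gaussian.cfg backup/yolov3_gaussian_best.weights

# FPS on a video (printed in the demo window/console)
./darknet detector demo data/obj.data cfg/yolov3_gaussian.cfg backup/yolov3_gaussian_best.weights test.mp4
```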

@litingsjj

@AlexeyAB Have you tried Gaussian with yolov2?

@lq0104

lq0104 commented Nov 19, 2019

> Hi @AlexeyAB, I trained again with -map.
> I found that the loss increases and the mAP increases too.
> Do you know what the reason is?
> Thank you very much
> chart

Hi @Code-Fight, could you show me your final learning rate? Thanks!

@Code-Fight
Author

@lq0104 Hi, this is my cfg:

batch=64
subdivisions=16
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
flip=0
learning_rate=0.00001
burn_in=1000

@lq0104

lq0104 commented Nov 19, 2019

Yesterday I trained on my dataset with the Gaussian layer; after a few hundred iterations the loss diverged to NaN with learning_rate=0.001. Thanks again; I will try the new learning rate. :)

@Code-Fight
Author

@zpmmehrdad Hi, sorry for the late reply.
[Gaussian_yolo]: 55+ FPS on a 1080 Ti.
