
Early stop using the example file #3

Closed
frankfxb opened this issue May 2, 2017 · 3 comments

Comments


frankfxb commented May 2, 2017

Using the example training file bigdata.tr.txt and validation file bigdata.te.txt to perform the ffm test:

difacto local.conf data_in=data/bigdata.tr.txt data_val=data/bigdata.te.txt learner=ffmsgd  V_dim=4 max_num_epochs=10 batch_size=1000 has_aux=1 field_num=18

The program exits after only two epochs:

[12:23:58] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:80: Start epoch 0
    1        200    2e+02 |         0 | 0.6932  0.64692
[12:23:59] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:82: Epoch[0] Training: Rows = 200, loss = 0.693153, AUC = 0.646919
[12:24:01] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:87: Epoch[0] Validation: Rows = 200, loss = 0.693147, AUC = 0.619823
[12:24:01] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:80: Start epoch 1
    4        400    2e+02 |         0 | 0.6931  0.61127
[12:24:02] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:82: Epoch[1] Training: Rows = 200, loss = 0.693147, AUC = 0.611268
[12:24:04] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:87: Epoch[1] Validation: Rows = 200, loss = 0.693147, AUC = 0.619823
[12:24:04] /opt/codebase/dmlc/DiFacto2_ffm/src/sgd/sgd_learner.cc:94: Change of loss [8.80543e-06] < stop_rel_objv [1e-05]

It seems FFM cannot converge during training.


CNevd commented May 3, 2017

@frankfxb try pulling the newest code and disabling the stop criterion for AUC.
BTW, using a larger dataset would be better.


frankfxb commented May 3, 2017

Thanks, it works now.

It seems the FTRL optimizer has been removed, right?


CNevd commented May 3, 2017

Yes, it will be added back later along with a run_yarn.sh demo. @frankfxb

@frankfxb frankfxb closed this as completed May 5, 2017