
About the reproduce results of deepex #4

Closed
EternalEep opened this issue Dec 11, 2021 · 3 comments
Comments

@EternalEep

Hi, thank you for your good work on OIE2016. I ran your default parameters on 1 V100 GPU, but I got the following results:

[screenshot: reproduced evaluation results]

Did I make some mistake that prevents me from reaching the reported 0.72 F1?

Also, I find that the contrastive pretraining code for the deep-ranking-model has not been released, and the current code is inference-only. It needs about 3 hours to test. Am I right?

Looking forward to your reply!

@HaoyunHong

Hi @EternalEep, thanks very much for your interest! We are taking a look at the reproduction mismatch issue.

The deep-ranking-model is released at https://huggingface.co/Magolor/deepex-ranking-model, as referenced in https://github.com/cgraywang/deepex/blob/a4a4cf60c96e1bfe3ddc8007498bf5ed783af730/scripts/ranking.py#L33, and you can use it directly in our code (with Hugging Face's transformers package).

The pretraining code for the deep-ranking-model will be released later. Yes, testing takes about 3 hours with 1 V100 GPU.
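For reference, loading the released model with the transformers package might look like the sketch below. The repo id `Magolor/deepex-ranking-model` is taken from the link above; `load_ranking_model` is a hypothetical helper, and the actual model class used in `scripts/ranking.py` may differ from the generic `AutoModel`.

```python
# Hypothetical sketch of pulling the released deep-ranking-model from the
# Hugging Face Hub; not the exact loading code used by deepex itself.
MODEL_ID = "Magolor/deepex-ranking-model"  # repo id from the comment above

def load_ranking_model(model_id: str = MODEL_ID):
    """Load the tokenizer and model weights (downloads on first use, then cached)."""
    # Imported inside the function so this module can be inspected even
    # when transformers is not installed.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model
```

The first call downloads the weights to the local Hugging Face cache; subsequent calls reuse the cached copy.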

@EternalEep

Thank you very much. I guess the mismatch may come from the environment, such as the versions of some packages.

@cgraywang

@EternalEep Thanks, I will close the issue for now. If additional issues come up, please feel free to reach out.
