
Illustrate performance gain of each pretrained model & how each model got lighter #17

Open
snoop2head opened this issue Dec 5, 2021 · 2 comments

Comments

@snoop2head (Owner) commented:

For each pretrained model, display:

  • the wandb log
  • a table with the maximum F1 score over 100 epochs and FLOPS (or inference time)
  • the percentage gain in both metrics, F1 score and FLOPS (or inference time)
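The percentage-gain column could be computed as in this sketch; the numbers below are placeholders for illustration, not real measurements:

```python
def percentage_gain(baseline: float, value: float, higher_is_better: bool = True) -> float:
    """Percentage improvement of `value` over `baseline`.

    For F1 (higher is better) a positive result means the model improved;
    for FLOPS / inference time (lower is better) a positive result means
    the model got lighter or faster.
    """
    if higher_is_better:
        return (value - baseline) / baseline * 100.0
    return (baseline - value) / baseline * 100.0

# Placeholder numbers, for illustration only.
f1_gain = percentage_gain(0.80, 0.84, higher_is_better=True)       # +5.0%
time_gain = percentage_gain(120.0, 90.0, higher_is_better=False)   # +25.0%
print(f"F1 gain: {f1_gain:.1f}% | inference-time gain: {time_gain:.1f}%")
```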
@snoop2head changed the title from “Illustrate performance gain & how much model got lighter” to “Illustrate performance gain of each pretrained model & how each model got lighter” on Dec 5, 2021
@lkm2835 (Collaborator) commented Dec 5, 2021:

It seems important which device (CPU or GPU; V100 or smaller) is used to measure the inference time.
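Whatever device is chosen, the measurement harness could look roughly like this device-agnostic sketch, with a dummy workload standing in for the model's forward pass (on GPU, the timed callable would additionally need to call `torch.cuda.synchronize()`, since CUDA kernels launch asynchronously):

```python
import time
from statistics import mean, stdev


def time_inference(fn, n_warmup: int = 5, n_runs: int = 30):
    """Time `fn` over `n_runs` calls after `n_warmup` warm-up calls.

    On GPU, kernels launch asynchronously, so the caller should make
    `fn` synchronize (e.g. call torch.cuda.synchronize()) before it
    returns, or the measured times will be meaningless.
    """
    for _ in range(n_warmup):
        fn()
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return mean(timings), stdev(timings)


# Dummy workload standing in for model(batch); replace with the real forward pass.
dummy_forward = lambda: sum(i * i for i in range(10_000))
avg, sd = time_inference(dummy_forward)
print(f"mean {avg * 1e3:.3f} ms ± {sd * 1e3:.3f} ms")
```

Warm-up runs matter on both CPU and GPU: the first few calls pay one-time costs (caches, allocator, kernel compilation) that would otherwise skew the mean.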

@snoop2head (Owner, Author) commented:

@lkm2835
I am thinking of the Colab free tier as a barometer for computing power.
According to a Stack Overflow post, Colab's hardware is:

  • GPU: Tesla K80
  • CPU: 1x single-core hyper-threaded Xeon processor @ 2.3 GHz (1 core, 2 threads)

But torch's version compatibility might cause a problem.
Pinning the torch and torchvision versions to the ones Colab ships is one of the options I am considering.
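One way to enforce such a pin would be a startup check comparing the installed version strings against the pinned ones. A minimal sketch (the pinned values below are hypothetical placeholders, not Colab's actual versions; the real check would read `torch.__version__` and `torchvision.__version__`):

```python
def matches_pin(installed: str, pinned: str) -> bool:
    """True if `installed` matches `pinned` on every pinned component.

    Pinning "1.10" accepts "1.10.0" and "1.10.2" but rejects "1.9.1".
    Local build suffixes such as "+cu111" are ignored.
    """
    installed_parts = installed.split("+")[0].split(".")
    pinned_parts = pinned.split(".")
    return installed_parts[: len(pinned_parts)] == pinned_parts


# In a real check, `installed` would come from torch.__version__ /
# torchvision.__version__; the pins here are hypothetical placeholders.
assert matches_pin("1.10.0+cu111", "1.10")
assert not matches_pin("1.9.1", "1.10")
```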
