Question about utils/evaluation.py #13

Open
lawlict opened this issue Aug 22, 2019 · 2 comments
lawlict commented Aug 22, 2019

Hello @seungwonpark, thank you greatly for your work!
I noticed that utils/evaluation.py has a `break` in the loop over the test dataloader.
That is, during evaluation, only the first case yielded by the test dataloader is taken into account when computing the test loss and test SDR. Could this cause problems like #5 and #9?

Looking forward to your response.
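
For context, the pattern in question looks roughly like this (a hypothetical sketch, not the repo's exact code; `testloader` and `evaluate_batch` are placeholder names):

```python
for batch in testloader:
    test_loss, test_sdr = evaluate_batch(batch)  # placeholder helper
    break  # exits after the first batch, so only one test case is evaluated
```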

@seungwonpark
Contributor

Hi, @lawlict

Yes, our current SDR calculation process may cause the SDR value to look strange, since it evaluates only once.
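
For reference, a fix would accumulate the metrics over every batch instead of breaking after the first one. A minimal sketch, assuming a PyTorch model; `model`, `criterion`, `testloader`, and `compute_sdr` are placeholder names, not the repo's exact identifiers:

```python
import torch

def validate(model, criterion, testloader, compute_sdr):
    # Evaluate on the whole test set instead of only the first batch.
    model.eval()
    total_loss, total_sdr, num_batches = 0.0, 0.0, 0
    with torch.no_grad():
        for inputs, targets in testloader:
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item()
            total_sdr += compute_sdr(outputs, targets)  # placeholder SDR metric
            num_batches += 1
    model.train()
    # Averages now reflect every test case, not just the first one.
    return total_loss / num_batches, total_sdr / num_batches
```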


bigcash commented Nov 19, 2020

@lawlict From dataloader.py you can see that the test loader's batch size is only 1, so the `break` in evaluation.py may be okay.
