run evaluate.py has a problem #1

Closed · Faker0715 opened this issue Nov 12, 2022 · 3 comments
@Faker0715

Traceback (most recent call last):
File "/home/faker/Desktop/code/DualMessagePassing-main/SubgraphCountingMatching/evaluate.py", line 439, in
eval_metric, eval_results = evaluate_epoch(
File "/home/faker/Desktop/code/DualMessagePassing-main/SubgraphCountingMatching/evaluate.py", line 119, in evaluate_epoch
pred_c, (pred_v, pred_e), ((p_v_mask, p_e_mask), (g_v_mask, g_e_mask)) = model(pattern, graph)
ValueError: too many values to unpack (expected 3)
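For context, this error occurs whenever the left-hand side of a tuple assignment expects exactly three items but the callable returns a different number. A minimal, self-contained sketch of the mismatch (the function names and return values below are illustrative only, not the project's actual model API):

```python
# Hypothetical sketch of the unpacking mismatch; names are illustrative only.

def old_forward():
    # an older forward() returning four items makes a three-way unpack fail
    return 0.0, ("pred_v", "pred_e"), ("p_masks",), ("g_masks",)

def new_forward():
    # a forward() returning exactly three items matches the caller's pattern
    return 0.0, ("pred_v", "pred_e"), (("p_v_mask", "p_e_mask"), ("g_v_mask", "g_e_mask"))

try:
    pred_c, preds, masks = old_forward()
except ValueError as e:
    print(e)  # too many values to unpack (expected 3)

# the nested pattern used in evaluate.py unpacks cleanly against three items
pred_c, (pred_v, pred_e), ((p_v_mask, p_e_mask), (g_v_mask, g_e_mask)) = new_forward()
```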

@seanliu96
Collaborator

Hi,

Thanks for creating an issue for evaluation. I fixed this bug by removing the old functions. Please check the latest commit 275fab88447b991fcba0b457b9f3b275905816cc.

@Faker0715
Author

Thank you!

@seanliu96
Collaborator

If this bug has been fixed, please close this issue. Thank you!
