
Model test error is too high #29

Closed
Alice314 opened this issue Nov 3, 2020 · 7 comments
@Alice314 commented Nov 3, 2020

I'm sorry to bother you, but when I tried to replicate your work I ran into some difficulties. Here is the problem I met:

python src/main.py
+---------------------+------------------+
| Batch size | 128 |
+=====================+==================+
| Bins | 16 |
+---------------------+------------------+
| Bottle neck neurons | 16 |
+---------------------+------------------+
| Dropout | 0.500 |
+---------------------+------------------+
| Epochs | 5 |
+---------------------+------------------+
| Filters 1 | 128 |
+---------------------+------------------+
| Filters 2 | 64 |
+---------------------+------------------+
| Filters 3 | 32 |
+---------------------+------------------+
| Histogram | 0 |
+---------------------+------------------+
| Learning rate | 0.001 |
+---------------------+------------------+
| Tensor neurons | 16 |
+---------------------+------------------+
| Testing graphs | ./dataset/test/ |
+---------------------+------------------+
| Training graphs | ./dataset/train/ |
+---------------------+------------------+
| Weight decay | 0.001 |
+---------------------+------------------+

Enumerating unique labels.

100%|██████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2533.57it/s]

Model training.

Epoch: 0%| | 0/5 [00:00<?, ?it/s]
/home/jovyan/SimGNN/src/simgnn.py:212: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
Epoch (Loss=3.87038): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.23s/it]
Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.68s/it]

Model evaluation.

100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 102.39it/s]

Baseline error: 0.41597.

Model test error: 0.94024.

I found the model test error too high! The only thing I changed was the version of the libraries, which I replaced with the latest.
Could you help me with this problem?
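One likely culprit is the `UserWarning` in the log above: `torch.nn.functional.mse_loss` silently broadcasts a `(1, 1)` target against a `(1,)` prediction. A minimal sketch of the hazard (an illustration, not the repository's code) and the `view` fix:

```python
import torch
import torch.nn.functional as F

# With several pairs the broadcasting hazard is easy to see:
# a (3, 1) target against a (3,) prediction expands to (3, 3),
# so the "MSE" averages over 9 cross-pair differences instead of 3 matched ones.
t = torch.tensor([[0.1], [0.2], [0.3]])  # target, shape (3, 1)
p = torch.tensor([0.1, 0.2, 0.3])        # prediction, shape (3,)

wrong = F.mse_loss(t, p)           # broadcasts silently: nonzero even for a perfect match
right = F.mse_loss(t.view(-1), p)  # both shapes (3,): exactly zero here
```

Flattening the target with `.view(-1)` (or reshaping the prediction to match) before calling `mse_loss` removes the warning and makes the loss compare matched pairs only.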

@nachofest

I also have the same issue; I have a feeling it might be due to the warning given in the output.

@Alice314 (Author)

> I also have the same issue; I have a feeling it might be due to the warning given in the output.

Have you solved the problem yet?

@nachofest

I fixed the "warning" (assuming it was an unintended error), but the model error is still the same: too high.

@Erlix322

Any update on this? I tried it too and the error is way too high. Also, the number of unique labels doesn't match the readme: 100/100 here vs. 11,000 in the readme.
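For reference, the "Enumerating unique labels" count comes from scanning the dataset directory; a hedged sketch of that step (the JSON keys `labels_1`/`labels_2` are assumptions based on the usual SimGNN data format, not verified against `src/simgnn.py`):

```python
import glob
import json

def enumerate_global_labels(data_dir):
    """Collect the set of unique node labels across all graph-pair JSONs.

    Assumes each file holds one graph pair with "labels_1" and "labels_2"
    lists, as in the usual SimGNN data layout (an assumption here).
    """
    labels = set()
    for path in glob.glob(data_dir + "/*.json"):
        with open(path) as handle:
            pair = json.load(handle)
        labels.update(pair["labels_1"])
        labels.update(pair["labels_2"])
    # Map each label to a feature index; the set size drives the input dimension.
    return {label: index for index, label in enumerate(sorted(labels))}
```

So if the bundled split covers only 100 synthetic pairs, the resulting label set (and the one-hot input dimension) will naturally differ from an 11,000-pair setup described in the readme.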

@Erlix322

Ok, so after reading a few other issues, this is due to the synthetically generated graphs, right? Since the question arises from time to time, it might be useful for the author @benedekrozemberczki to provide the whole dataset so that similar results can be reproduced.

@benedekrozemberczki (Owner) commented Nov 25, 2020 via email

Yes. However, uploading a larger set of JSONs here would not be feasible, as GitHub limits the number of files. Adding a compressed file might help, but based on 2 years of experience in open source, people do not tend to look at the readme files.

@Alice314 (Author) commented Nov 25, 2020

Maybe you are right, but when I tried to set the training dataset and the test dataset to be the same, I found that the result was still wrong. So I do not think it's a problem with the dataset. @benedekrozemberczki

python src/main.py
+---------------------+------------------+
| Batch size | 128 |
+=====================+==================+
| Bins | 16 |
+---------------------+------------------+
| Bottle neck neurons | 16 |
+---------------------+------------------+
| Dropout | 0.500 |
+---------------------+------------------+
| Epochs | 5 |
+---------------------+------------------+
| Filters 1 | 128 |
+---------------------+------------------+
| Filters 2 | 64 |
+---------------------+------------------+
| Filters 3 | 32 |
+---------------------+------------------+
| Histogram | 0 |
+---------------------+------------------+
| Learning rate | 0.001 |
+---------------------+------------------+
| Tensor neurons | 16 |
+---------------------+------------------+
| Testing graphs | ./dataset/train/ |
+---------------------+------------------+
| Training graphs | ./dataset/train/ |
+---------------------+------------------+
| Weight decay | 0.001 |
+---------------------+------------------+

Enumerating unique labels.

100%|█████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 12001.56it/s]

Model training.

Epoch: 0%| | 0/5 [00:00<?, ?it/s]
/home/jovyan/SimGNN/src/simgnn.py:221: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
Epoch (Loss=2.87421): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.78it/s]
Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3.07it/s]

Model evaluation.

100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 268.85it/s]

Baseline error: 0.48942.

Model test error: 0.84929.
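For interpreting these error numbers: the reported MSE is computed on similarity targets which, in the usual SimGNN setup (an assumption here; check `src/simgnn.py` for the exact formula), are GEDs normalized by mean graph size and squashed through an exponential:

```python
import math

def normalized_ged(ged, n_1, n_2):
    # Divide the raw edit distance by the mean of the two graph sizes.
    return ged / (0.5 * (n_1 + n_2))

def target_similarity(ged, n_1, n_2):
    # exp(-nGED) maps distance 0 to similarity 1.0 and large distances toward 0.
    return math.exp(-normalized_ged(ged, n_1, n_2))
```

Since targets live in [0, 1] under this transform, an error near 0.85 when training and testing on the same files is very large, which is consistent with a training bug (e.g. the broadcasting warning above) rather than a dataset issue.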
