This repository has been archived by the owner on Jan 1, 2024. It is now read-only.

questions about spj_rand #12

Open
WenxiongLiao opened this issue Jun 8, 2022 · 1 comment

Comments


WenxiongLiao commented Jun 8, 2022

1. What does spj_rand mean? Is spj_rand the same as ssg+spj? If not, how can I run ssg+spj?

2. I'm trying to reproduce the results of the ACL paper. I have executed `bash scripts/experiments_ours.sh v2.4_25` and `bash scripts/experiments_baseline.sh v2.4_25`. After running `python -m neuraldb.final_scoring` I got these results:
[screenshot of final_scoring output]

The results here only cover min/max, set, bool, and count. How can I output the results for atomic and join queries?

Contributor

j6mes commented Jun 8, 2022

The spj_rand model is the one you'll want to use for end-to-end evaluation. It is what is reported in the paper, and it is trained with negative random sampling augmentation. The spj variant has no negative sampling and doesn't perform well when tested on external data.

The ssg+spj model takes the output of the SSG model and feeds it into the pre-trained spj_rand model.

The atomic/join distinction is orthogonal to these query types and wasn't reported in the ACL paper; it appears only in the VLDB and arXiv pre-prints. You could add a feature to each instance counting the number of facts used: if this is == 1, the query is atomic; if it is > 1, it is a join. This should be sufficient.
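A minimal sketch of that fact-counting idea, assuming each instance is a dict with a `facts` list (the field name is a guess; adapt it to the actual NeuralDB instance schema):

```python
# Hypothetical helper: label an instance as "atomic" or "join" by
# counting the supporting facts it uses, per the suggestion above.

def classify_instance(instance):
    """Return 'atomic' for one supporting fact, 'join' for more."""
    n_facts = len(instance.get("facts", []))
    if n_facts == 1:
        return "atomic"
    if n_facts > 1:
        return "join"
    return "no_facts"  # e.g. unanswerable queries with no support

# Toy usage on made-up instances:
instances = [
    {"query": "q1", "facts": ["f1"]},
    {"query": "q2", "facts": ["f1", "f2"]},
]
print([classify_instance(i) for i in instances])  # ['atomic', 'join']
```

The labels could then be used to bucket the per-query scores before aggregation in the final scoring step.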
