More Guidance Needed on Training Models on Own Datasets #2
Thanks very much for your many useful suggestions; I respond to them as follows.

Sounds great! Thank you so much for taking the time to share these details. Looking forward to the updates!
AnselCmy added the topic: documentation (Question about documentation) and status: in progress labels and removed the documentation label on Mar 18, 2022.
We have updated what was to be done in the last response. For further updates, please see our news; this issue will be closed.
Would it be possible for you to provide more details on how to train various models on users' own datasets?
It's not clear what has to go into the config files (e.g., what specifically should be set for `env`, `interpreter`, `program`, or `args`, or why `program` appears twice in the config). One thing that may be helpful: in the docs you share results of the library on various datasets (https://zjukg.github.io/NeuralKG/result.html). If you could provide the command you used to run each of the pipelines, that would be great.
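For context, here is a minimal sketch of how a config built from the keys named above (`env`, `interpreter`, `program`, `args`) is typically consumed to launch a run. All specific values (the script name, model name, dataset path) are hypothetical placeholders, not taken from NeuralKG's actual configs:

```python
# Hypothetical sketch of a launch config using the keys mentioned above.
# The script name, model name, and dataset path are made-up placeholders.
config = {
    "env": {"CUDA_VISIBLE_DEVICES": "0"},  # environment variables for the run
    "interpreter": "python3",              # interpreter used to launch the script
    "program": "main.py",                  # entry-point training script
    "args": [                              # CLI arguments forwarded to the script
        "--model_name", "TransE",
        "--dataset_name", "MyKG",
        "--data_path", "./dataset/MyKG",
    ],
}

# Assemble the launch command the way such configs are typically consumed:
command = [config["interpreter"], config["program"], *config["args"]]
print(" ".join(command))
```

Seeing one concrete, known-good config like this for each reported result would resolve most of the ambiguity above.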
Also, it's not clear from the docs how one must treat the data loaders differently compared to tabular data, as the examples in the docs refer to image datasets: https://zjukg.github.io/NeuralKG/neuralkg.data.html#neuralkg.data.base_data_module.BaseDataModule.train_dataloader. What must the structure of datasets be for various models? What can be done to datasets to better prepare them for different models (e.g., encoding entities/relations)?
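As a rough illustration of the kind of dataset preparation being asked about, a common preprocessing step for knowledge-graph embedding models is mapping raw entity/relation strings to contiguous integer IDs. This is a generic sketch of that step, not NeuralKG's actual loader, and the example triples are invented:

```python
# Generic sketch: encode raw (head, relation, tail) string triples into
# integer IDs, the usual input format for KG embedding models.
# The example triples below are made up for illustration.

def build_vocab(triples):
    """Assign contiguous integer IDs to entities and relations."""
    ent2id, rel2id = {}, {}
    for h, r, t in triples:
        for e in (h, t):
            if e not in ent2id:
                ent2id[e] = len(ent2id)
        if r not in rel2id:
            rel2id[r] = len(rel2id)
    return ent2id, rel2id

def encode(triples, ent2id, rel2id):
    """Convert string triples to (head_id, rel_id, tail_id) tuples."""
    return [(ent2id[h], rel2id[r], ent2id[t]) for h, r, t in triples]

triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
]
ent2id, rel2id = build_vocab(triples)
encoded = encode(triples, ent2id, rel2id)
print(encoded)
```

Documenting whether (and how) the library expects users to perform this encoding themselves, versus having the data module do it from raw text files, would answer the structure question above.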