verbalization #17
Comments
Hi @SapaePhyu! The easiest way to reproduce the results is by running the following command:

```
python -m a2t.evaluation --config resources/predefined_configs/{DATASET}.arguments.config.json
```

where `{DATASET}` is replaced by the dataset name. However, answering your question, the templates contain placeholders for different information, such as the event trigger and the argument candidate:

```python
text = "..."
template = "{arg} bought something."
verbalization = template.format(**{"arg": "John D. Idol", "trg": "hired", "trg_type": "..."})
model_input = "{original_sentence} </s> {verbalization}".format(original_sentence=text, verbalization=verbalization)
```

Note that this is a very simplified example. You can see the templates in the appendix of the paper or in the task configs at `resources/predefined_configs/`.
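The steps in the comment above can be sketched end to end. This is a minimal illustration only: the example sentence and the helper names `verbalize` and `build_model_input` are invented here, and the template is the simplified one from the comment, not an exact template from the configs.

```python
# Sketch of the verbalization step: fill a role template with a candidate
# argument, then build the "premise </s> hypothesis" string that is fed to
# the entailment model. Helper names and the example sentence are
# illustrative, not part of the a2t codebase.

def verbalize(template: str, **fillers) -> str:
    """Fill a template; str.format silently ignores unused fillers."""
    return template.format(**fillers)

def build_model_input(sentence: str, verbalization: str) -> str:
    # Premise and hypothesis are joined with the </s> separator token,
    # as in the simplified example above.
    return "{} </s> {}".format(sentence, verbalization)

text = "John D. Idol hired a new designer."
template = "{arg} bought something."
verbalization = verbalize(template, arg="John D. Idol", trg="hired")
print(build_model_input(text, verbalization))
# -> John D. Idol hired a new designer. </s> John D. Idol bought something.
```

The entailment model then scores this pair; extra keys such as `trg` are harmless for templates that do not use them, which is why one filler dictionary can serve many templates.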
Thank you, Sainz! Is it correct to think that the inputs of the placeholders are the outputs of NER? If so, are there any constraints on selecting the entities, or do the placeholders accept all entities one by one?
Yes! The actual models perform trigger-entity classification in order to assign role fillers to the events. Depending on your evaluation, you would want to use gold entities or entities predicted by a NER model. Regarding the constraints, the configuration file for each dataset defines what we call ...
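The constraint step described here can be sketched as filtering candidate entities by their NER type before any template is filled. The role names and allowed-type mapping below are invented for illustration; the real per-role constraints live in the dataset config files under `resources/predefined_configs/`.

```python
# Sketch: restrict which NER entities may fill each role slot before
# verbalization. The mapping is hypothetical; actual constraints are
# defined per dataset in the predefined config files.

ALLOWED_TYPES = {
    "buyer": {"PERSON", "ORG"},   # hypothetical role constraint
    "place": {"GPE", "LOC"},
}

def candidate_fillers(entities, role):
    """Keep only entities whose NER type is allowed for the given role."""
    allowed = ALLOWED_TYPES.get(role, set())
    return [text for text, ner_type in entities if ner_type in allowed]

entities = [("John D. Idol", "PERSON"), ("hired", "TRIGGER"), ("New York", "GPE")]
print(candidate_fillers(entities, "buyer"))  # only the PERSON entity survives
```

With a filter like this, the trigger word "hired" never reaches the `{arg}` slot of a role template, because its type is not allowed for any argument role.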
Thank you so much for your detailed explanation!!
Hi Sainz,
I am trying to reproduce the results in your paper "Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning", but I cannot find the verbalization method in either the paper or the code. In the Label verbalization section, you mention that "A verbalization is generated using templates that have been manually written based on the task guidelines of each dataset." May I know how a sentence is generated after giving the model the labels, template, and original sentence? For example, when filling the template "{arg} bought something" in Figure 1, how does the model know to choose "John D. Idol" and not "hired"? Please kindly reply when you have time.