action_evaluation expected file format #16

Closed
seo-95 opened this issue Aug 3, 2020 · 4 comments
Comments

@seo-95
Contributor

seo-95 commented Aug 3, 2020

Hi,
I am trying to evaluate my model's results on the fashion dataset using the script mm_action_prediction/tools/action_evaluation.py, but I do not understand the file format described in mm_action_prediction/README.md.

  1. <action_token>: <action_log_prob> — by action_token, do you mean the name of the action?

  2. <attribute_label>: <attribute_val> — I have a vague idea of what attribute_label is from the IGNORE_ATTRIBUTES list, but it is not clear what to insert as attribute_val. For each dialogue turn I predict a set of arguments as a multi-label classification, but I do not understand what you mean by label and arguments. Could you please explain, with an example, what a label is versus a value for an attribute in the fashion dataset?

  3. I have filled the dict this way for a particular turn of a dialogue:

{'action': 'SearchDatabase', 'action_log_prob': {'None': -2.091989517211914, 'SearchDatabase': -0.17580336332321167, 'SearchMemory': -3.525150775909424, 'SpecifyInfo': -4.88762903213501, 'AddToCart': -7.144548416137695}, 'attributes': {}}

However, I am having issues with the script because the attributes field is empty while the script accesses round_datum["attributes"][key].
How should I fill the output JSON when no arguments are found for a particular dialogue turn?

@satwikkottur
Contributor

Hello @seo-95 ,

  1. That is correct. action_token is indeed the name of the action, for example, SearchDatabase.
  2. Good catch — I'll push a fix to account for an empty attributes key.

@seo-95
Contributor Author

seo-95 commented Aug 3, 2020

Thank you @satwikkottur. Could you also please provide an example of an attribute label versus an attribute value? Thanks!

@seo-95
Contributor Author

seo-95 commented Aug 4, 2020

I debugged action_evaluation.py and found that the fault was mine. The attributes field in the model output JSON cannot be empty; instead, it contains several keys, each mapping to a list, not a single value as described in mm_action_prediction/README.md. The correct format is:

{'action': 'SearchDatabase', 'action_log_prob': {'None': -2.091989517211914, 'SearchDatabase': -0.17580336332321167, 'SearchMemory': -3.525150775909424, 'SpecifyInfo': -4.88762903213501, 'AddToCart': -7.144548416137695}, 'attributes': {'attributes': []}}
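For anyone hitting the same problem, here is a minimal sketch of serializing one turn's prediction in this format with the standard json module. The "attributes" key name inside the inner dict follows the example above; any other API-specific attribute keys are assumptions, not the official schema.

```python
import json

# One predicted dialogue turn. Note that "attributes" maps each
# attribute key to a LIST of predicted values (an empty list when
# nothing is predicted), not to a single value.
turn_prediction = {
    "action": "SearchDatabase",
    "action_log_prob": {
        "None": -2.091989517211914,
        "SearchDatabase": -0.17580336332321167,
        "SearchMemory": -3.525150775909424,
        "SpecifyInfo": -4.88762903213501,
        "AddToCart": -7.144548416137695,
    },
    "attributes": {"attributes": []},  # empty list, not a missing key
}

print(json.dumps(turn_prediction, indent=2))
```

The key point is that every attribute key the evaluator expects must be present, even when the model predicts nothing for it.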

This issue can be closed once attribute_label and attribute_val are described more clearly in the README. Thank you.

@satwikkottur
Contributor

On taking a closer look: attributes contains all the keys (API attributes) that need to be predicted for a given API. The part of the code you're referring to (here) is reached only when the predicted action matches the ground truth.

I'll add the description of attribute_label and attribute_val in the README.md. Thanks!
