Add VQA V2.0 and Visual Dialog V0.9. #54
This pull request adds two new tasks: VQA V2.0 and Visual Dialog V0.9.
To test the newly added tasks:

For VQA 2.0:

```
python examples/display_data.py -t vqa_coco2014_v2 --download-path 'path_to_COCO_img'
```

For Visual Dialog:

```
python examples/display_data.py -t visdial
```
Currently, the Visual Dialog task inherits from the default `DialogTeacher` class, and there is no placeholder for the additional image information. The `DialogData` class uses entries of the format `[(x, y, r, c), new_episode?]`; could we extend this to `[(x, y, r, c, i), new_episode?]`, where `i` is some optional additional information such as an image_id? If so, I can send another pull request with that change.
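For concreteness, a rough sketch of what one such extended entry might look like (the field values and the extra `i` slot are illustrative only, not the current `DialogData` format):

```python
# Hypothetical extended DialogData entry: (x, y, r, c, i) plus the new_episode flag.
entry = (
    (
        'What color is the bus?',          # x: query text
        ['yellow'],                        # y: labels
        None,                              # r: reward
        ['yellow', 'red', 'blue'],         # c: label candidates
        'COCO_train2014_000000000123',     # i: optional extra info, e.g. an image_id
    ),
    True,                                  # new_episode?
)
```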
Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. In order for us to review and merge your code, please sign up at https://code.facebook.com/cla - and if you have received this in error or have any questions, please drop us a line at email@example.com. Thanks!
If you are contributing on behalf of someone else (e.g. your employer), the individual CLA is not sufficient; use https://developers.facebook.com/opensource/cla?type=company instead. Contact firstname.lastname@example.org if you have any questions.
Thanks Jiasen!! I think we'd like to keep the dataset downloading the images automatically (to the datapath). I know it's large, but users should only need to do it once anyway. Everything in downloads is also downloaded automatically (currently just the MemNN GitHub repo).
As for DialogTeacher/DialogData, I think we want to write a new one instead that includes some of the data loading from your vqa_coco2014 (v1) task and prepares us to handle the preprocessing options we talked about before. We don't want image_id as part of the action/observation dictionary, just the images themselves, so we want to include the image loading as part of that data class.
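A very rough sketch of that direction (all names here are hypothetical, not existing ParlAI classes; assumes Pillow is available for loading the images):

```python
import os

from PIL import Image  # assumed available for image loading


class ImageDialogData(object):
    """Hypothetical data class that loads images itself, so agents only
    ever see the image in the observation, never an image_id."""

    def __init__(self, entries, image_dir):
        # entries: list of ((x, y, r, c, image_file), new_episode?) tuples
        self.entries = entries
        self.image_dir = image_dir

    def get(self, idx):
        (x, y, r, c, image_file), new_episode = self.entries[idx]
        return {
            'text': x,
            'labels': y,
            'label_candidates': c,
            'episode_done': new_episode,
            # load the actual image here instead of passing an id along
            'image': Image.open(os.path.join(self.image_dir, image_file)),
        }
```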
looking really good overall! how are you testing if the images are loading properly?
would be awesome to get a model in next so that we can try to train it and see if we can reproduce the results from the paper
yes, we need to have at least a simple example of visualizing the images...
(Quoted from Alexander Miller's review of May 11, 2017, which also included the following inline comments.)

In parlai/core/dialog_teacher.py:

```diff
@@ -265,7 +265,7 @@ def get(self, episode_idx, entry_idx=0):
             table['reward'] = entry
         if len(entry) > 3:
             table['label_candidates'] = entry
-        if len(entry) > 4 and not opt.get('no_images', False):
+        if len(entry) > 4 and not self.opt.get('no_images', False):
```

ah good catch thank you

In parlai/tasks/vqa_coco2014_v2/agents.py:

```python
        return shared

    def _setup_data(self, data_path, annotation_path):
        print('loading: ' + data_path)
        with open(data_path) as data_file:
            self.ques = json.load(data_file)

        if self.datatype != 'test':
            print('loading: ' + annotation_path)
            with open(annotation_path) as data_file:
                self.annotation = json.load(data_file)

        self.len = len(self.ques['questions'])


class DefaultTeacher(OeTeacher):
    pass
```

does v2 have a multiple-choice version?
A simple example which prints an ASCII version of images (after installing asciimatics):

```python
import sys

from asciimatics.renderers import ImageFile

# render the image whose path is given on the command line as ASCII art
img = ImageFile(sys.argv[1], height=30)
print(img)
```
maybe we want something like this in display_data?
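A rough sketch of how that could be wired into a display loop (purely illustrative; the `image_path` field and the helper name are assumptions, not existing display_data behavior):

```python
from asciimatics.renderers import ImageFile


def display_observation(obs, img_height=30):
    # print the text of an observation, then an ASCII rendering of its image
    print(obs.get('text', ''))
    img_path = obs.get('image_path')  # hypothetical field holding the image path
    if img_path:
        print(ImageFile(img_path, height=img_height))
```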