This paper, "Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative Multimodal Prompt", has been accepted to Findings of ACL 2023. The code will be released soon.
The image-text data used in our paper can be downloaded from Google Drive.
To train the JMASA, MASC, and MATE tasks on the two Twitter datasets, run the scripts below. Note that you first need to change all of the file paths in "GMP/src/data/jsons/few_shot_for_prompt/twitter_2015/" and "GMP/src/data/jsons/few_shot_for_prompt/twitter17_info.json" to your own paths.
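As a minimal sketch of the path update, a `sed` substitution can rewrite an absolute-path prefix inside the JSON configs in bulk. Everything below is an assumption for illustration: the `demo_jsons` directory, the key names, and the `/data/GMP` prefix stand in for the real files and whatever prefix ships in your copies.

```shell
# Hypothetical demo file standing in for one of the real few-shot JSONs;
# its key names and the /data/GMP prefix are made up for illustration.
OLD_ROOT="/data/GMP"
NEW_ROOT="$PWD/GMP"
mkdir -p demo_jsons
printf '{"img_dir": "%s/images", "ann": "%s/ann.json"}\n' \
  "$OLD_ROOT" "$OLD_ROOT" > demo_jsons/twitter_2015.json

# Rewrite every file under the config directory that still mentions OLD_ROOT.
grep -rl "$OLD_ROOT" demo_jsons | while read -r f; do
  sed -i "s|$OLD_ROOT|$NEW_ROOT|g" "$f"   # GNU sed in-place substitution
done
```

Point the same loop at `GMP/src/data/jsons/few_shot_for_prompt/` (and adjust `OLD_ROOT` to the prefix actually present in the files) to update the real configs in one pass.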
```shell
sh scripts/15MATE_pretrain_for_generated_prompt_multitasks.sh
sh scripts/17MATE_pretrain_for_generated_prompt_multitasks.sh
sh scripts/15_pretrain_full_for_generated_dual_prompts_multitasks_Aspect.sh
sh scripts/17_pretrain_full_for_generated_dual_prompts_multitasks_Aspect.sh
sh scripts/15MASC_pretrain_for_generated_prompt.sh
sh scripts/17MASC_pretrain_for_generated_prompt.sh
```
Some of our code is based on the code of VLP-MABSA. Many thanks!