# Adversarial Networks for Cross-Modal Food Retrieval

Code for ACME (PyTorch)

**Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images**
Hao Wang, Doyen Sahoo, Chenghao Liu, Ee-Peng Lim, Steven C. H. Hoi
CVPR 2019

*(Figure: model outline)*

If you find this code useful, please consider citing:

```
@article{wang2019learning,
  title={Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images},
  author={Wang, Hao and Sahoo, Doyen and Liu, Chenghao and Lim, Ee-peng and Hoi, Steven CH},
  journal={arXiv preprint arXiv:1905.01273},
  year={2019}
}
```

Our work is an extension of im2recipe, from which you can borrow some of the food-data pre-processing methods.

## Installation

We use PyTorch v0.5.0 and Python 3.5.2 in our experiments.
You first need to download the Recipe1M dataset from here.

## Training

Train the ACME model:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py
```

We ran our experiments with a batch size of 64, which takes about 12 GB of GPU memory.
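Training aligns recipe and image embeddings with a triplet objective (see `triplet_loss.py`). As a rough illustration of the idea, here is a minimal NumPy sketch of a hinge-style triplet loss on cosine similarity; the margin value and function names are illustrative assumptions, not the repository's exact implementation:

```python
import numpy as np

def triplet_hinge_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss: pull the matching (recipe, image) pair
    together and push the mismatched pair at least `margin` further away.
    The margin value here is illustrative, not the repository's setting."""
    def cos(a, b):
        return np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    # loss = max(0, margin - sim(anchor, positive) + sim(anchor, negative))
    return np.maximum(0.0, margin - cos(anchor, positive) + cos(anchor, negative))

# Example: a well-separated triplet incurs zero loss.
a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # positive: close to the anchor
n = np.array([-1.0, 0.0])  # negative: far from the anchor
print(triplet_hinge_loss(a, p, n))  # 0.0
```

Swapping the positive and negative in the example yields a large positive loss, which is what drives the embeddings of matching recipe-image pairs together during training.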

## Testing

Test the trained model:

```bash
CUDA_VISIBLE_DEVICES=0 python test.py
```
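Cross-modal retrieval is typically evaluated by ranking, for each image, all candidate recipes by embedding similarity and reporting median rank (MedR) and recall@K. The sketch below shows this generic evaluation procedure in NumPy; it is an assumption about the metric computation, not a copy of the repository's `test.py`:

```python
import numpy as np

def retrieval_metrics(img_emb, rec_emb, ks=(1, 5, 10)):
    """Image-to-recipe retrieval: rank each image's true recipe
    (row i matches row i) by cosine similarity, then report median
    rank and recall@K. A generic evaluation sketch."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    rec = rec_emb / np.linalg.norm(rec_emb, axis=1, keepdims=True)
    sims = img @ rec.T                 # (N, N) cosine-similarity matrix
    order = np.argsort(-sims, axis=1)  # recipes sorted by descending similarity
    # 1-based rank of the matching recipe for each image
    ranks = 1 + np.argmax(order == np.arange(len(img))[:, None], axis=1)
    return {"MedR": float(np.median(ranks)),
            **{f"R@{k}": float(np.mean(ranks <= k)) for k in ks}}
```

With perfectly aligned embeddings (e.g. `retrieval_metrics(e, e)` for any embedding matrix `e`), MedR is 1 and every recall@K is 1.0; real models are compared by how close they get to that.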

Pre-trained models can be downloaded from Google Drive.
