
liyiying committed May 13, 2019
0 parents commit aeb5d9f3f40deefbf6093b6f6206c3b1a0e8abef
Showing with 2,885 additions and 0 deletions.
  1. BIN .DS_Store
  2. +12 −0 .idea/CR_0509_Feature_Critic_VD.iml
  3. +4 −0 .idea/encodings.xml
  4. +4 −0 .idea/misc.xml
  5. +8 −0 .idea/modules.xml
  6. +544 −0 .idea/workspace.xml
  7. +57 −0 README.md
  8. 0 data_process/__init__.py
  9. BIN data_process/__init__.pyc
  10. BIN data_process/__pycache__/__init__.cpython-37.pyc
  11. BIN data_process/__pycache__/data_gen_PACS.cpython-37.pyc
  12. BIN data_process/__pycache__/data_gen_VD.cpython-37.pyc
  13. +112 −0 data_process/data_gen_PACS.py
  14. BIN data_process/data_gen_PACS.pyc
  15. +144 −0 data_process/data_gen_VD.py
  16. BIN data_process/data_gen_VD.pyc
  17. +35 −0 get_model_dataset.sh
  18. BIN logs/.DS_Store
  19. +130 −0 main_Feature_Critic.py
  20. +101 −0 main_baseline.py
  21. +452 −0 model_PACS.py
  22. BIN model_PACS.pyc
  23. +607 −0 model_VD.py
  24. BIN model_VD.pyc
  25. BIN model_output/.DS_Store
  26. +5 −0 networks/__init__.py
  27. BIN networks/__init__.pyc
  28. BIN networks/__pycache__/__init__.cpython-36.pyc
  29. BIN networks/__pycache__/__init__.cpython-37.pyc
  30. BIN networks/__pycache__/alexnet.cpython-37.pyc
  31. BIN networks/__pycache__/lenet.cpython-36.pyc
  32. BIN networks/__pycache__/lenet.cpython-37.pyc
  33. BIN networks/__pycache__/resnet.cpython-36.pyc
  34. BIN networks/__pycache__/resnet.cpython-37.pyc
  35. BIN networks/__pycache__/vggnet.cpython-36.pyc
  36. BIN networks/__pycache__/vggnet.cpython-37.pyc
  37. BIN networks/__pycache__/wide_resnet.cpython-36.pyc
  38. BIN networks/__pycache__/wide_resnet.cpython-37.pyc
  39. +66 −0 networks/alexnet.py
  40. BIN networks/alexnet.pyc
  41. +29 −0 networks/lenet.py
  42. BIN networks/lenet.pyc
  43. +222 −0 networks/resnet.py
  44. BIN networks/resnet.pyc
  45. +80 −0 networks/vggnet.py
  46. BIN networks/vggnet.pyc
  47. +89 −0 networks/wide_resnet.py
  48. BIN networks/wide_resnet.pyc
  49. +184 −0 utils.py
  50. BIN utils.pyc
BIN +6 KB .DS_Store
Binary file not shown.
@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$" />
    <orderEntry type="jdk" jdkName="Python 3.7 (origin)" jdkType="Python SDK" />
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
  <component name="TestRunnerService">
    <option name="projectConfiguration" value="pytest" />
    <option name="PROJECT_TEST_RUNNER" value="pytest" />
  </component>
</module>
@@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="Encoding" addBOMForNewFiles="with NO BOM" />
</project>
@@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.7 (origin)" project-jdk-type="Python SDK" />
</project>
@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectModuleManager">
    <modules>
      <module fileurl="file://$PROJECT_DIR$/.idea/CR_0509_Feature_Critic_VD.iml" filepath="$PROJECT_DIR$/.idea/CR_0509_Feature_Critic_VD.iml" />
    </modules>
  </component>
</project>

Large diffs are not rendered by default.

@@ -0,0 +1,57 @@
# Feature_Critic_VD
Demo code for 'Feature-Critic Networks for Heterogeneous Domain Generalization'. The paper is available at https://arxiv.org/abs/1901.11448.

> Yiying Li, Yongxin Yang, Wei Zhou, Timothy M. Hospedales. Feature-Critic Networks for Heterogeneous Domain Generalization. ICML 2019.
### Dataset
The example code mainly reproduces the heterogeneous DG experiments on the Visual Decathlon (VD) benchmark,
so the corresponding dataset must be downloaded from the official website (https://www.robots.ox.ac.uk/%7Evgg/decathlon/).
Please download the following files (a layout check is sketched after the list):
```
(1) Annotations and code. The devkit [22MB] contains the annotation files as well as example MATLAB code for evaluation (using this code is not a requirement).
(2) Images. The following archives contain the preprocessed images for each dataset:
    Preprocessed images [406MB]. Images from all datasets except ImageNet ILSVRC.
    Preprocessed ILSVRC images [6.1GB]. Images for the ImageNet ILSVRC dataset (shipped separately due to copyright issues). To download them, you must first register an ImageNet account (http://image-net.org/signup).
```
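
After extracting the downloads, `data_process/data_gen_VD.py` expects the annotation files under `data/VD/decathlon-1.0/annotations/`. Below is a minimal layout check, assuming you extract the devkit so that its `annotations` folder lands at that path (the extraction location itself is an assumption inferred from the code):
```
import os

# Annotation directory hard-coded in data_process/data_gen_VD.py.
ANNOT_DIR = 'data/VD/decathlon-1.0/annotations/'

# Training annotation files the loader expects (see get_data_folder in data_gen_VD.py).
expected = ['cifar100_train.json', 'daimlerpedcls_train.json', 'gtsrb_train.json',
            'omniglot_train.json', 'svhn_train.json', 'imagenet12_train.json',
            'aircraft_train.json', 'dtd_train.json', 'vgg-flowers_train.json',
            'ucf101_train.json']

missing = [f for f in expected if not os.path.isfile(os.path.join(ANNOT_DIR, f))]
print('missing annotation files:', missing if missing else 'none')
```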

### Installation

Install Miniconda and create the conda environment:
```
curl -o /tmp/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
bash /tmp/miniconda.sh
conda create -n FC_VD python=2.7.12
source activate FC_VD
```
Install necessary Python packages:
```
pip install torchvision pycocotools torch
```
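Optionally, a quick sanity check (not part of the original instructions) that the environment can import the required packages:
```
# Verify that the packages installed above are importable.
import torch
import torchvision
import pycocotools

print('torch', torch.__version__)
print('torchvision', torchvision.__version__)
print('pycocotools imported OK')
```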

### Running
First go to the Feature_Critic_VD code folder:
```
cd <path_to_Feature_Critic_VD_folder>
```
Then launch the entry script of the baseline method:
```
python main_baseline.py
```
Experiment data is saved in `<home_dir>/logs`.

Run Feature_Critic_VD:
```
python main_Feature_Critic.py
```
### Bibtex
```
@inproceedings{Li2019ICML,
  title={Feature-Critic Networks for Heterogeneous Domain Generalization},
  author={Li, Yiying and Yang, Yongxin and Zhou, Wei and Hospedales, Timothy},
  booktitle={The Thirty-sixth International Conference on Machine Learning},
  year={2019}
}
```
### Your own data
Please adapt the `<VD>` data folder for your own data.
No changes.
BIN +150 Bytes data_process/__init__.pyc
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,112 @@
from pycocotools.coco import COCO
import numpy as np
from PIL import Image
import torchvision.transforms as transforms
from utils import unfold_label, shuffle_data
transform_train = transforms.Compose([
    #transforms.RandomHorizontalFlip(p=0.75),
    #transforms.RandomCrop(224, padding=4),
    #transforms.CenterCrop(224),
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])


transform_test = transforms.Compose([
    #transforms.CenterCrop(224),
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

def get_domain_name():
    return {'0': 'photo', '1': 'art_painting', '2': 'cartoon', '3': 'sketch'}

def get_data_folder():
    data_folder = 'data/PACS/pacs_label/'
    train_data = ['photo_train_kfold.txt',
                  'art_painting_train_kfold.txt',
                  'cartoon_train_kfold.txt',
                  'sketch_train_kfold.txt']

    val_data = ['photo_crossval_kfold.txt',
                'art_painting_crossval_kfold.txt',
                'cartoon_crossval_kfold.txt',
                'sketch_crossval_kfold.txt']

    test_data = ['photo_test_kfold.txt',
                 'art_painting_test_kfold.txt',
                 'cartoon_test_kfold.txt',
                 'sketch_test_kfold.txt']
    return data_folder, train_data, val_data, test_data

class BatchImageGenerator:
    def __init__(self, flags, stage, file_path, metatest, b_unfold_label):

        if stage not in ['train', 'val', 'test']:
            raise ValueError('invalid stage!')

        self.configuration(flags, stage, file_path, metatest)
        self.load_data(b_unfold_label)

    def configuration(self, flags, stage, file_path, metatest):
        # Meta-test splits use their own batch size.
        if metatest:
            self.batch_size = flags.batch_size_metatest
        else:
            self.batch_size = flags.batch_size
        self.current_index = -1
        self.file_path = file_path
        self.stage = stage
        self.shuffled = False

    def load_data(self, b_unfold_label):
        file_path = self.file_path
        images = []
        labels = []
        with open(file_path, 'r') as file_to_read:
            while True:
                lines = file_to_read.readline()
                if not lines:
                    break
                image, label = lines.split()
                images.append(image)
                labels.append(int(label) - 1)  # kfold label files are 1-based
        if b_unfold_label:
            labels = unfold_label(labels=labels, classes=len(np.unique(labels)))
        self.images = np.array(images)
        self.labels = np.array(labels)
        self.file_num_train = len(self.labels)

        if self.stage == 'train':
            self.images, self.labels = shuffle_data(samples=self.images, labels=self.labels)

    def get_images_labels_batch(self):

        images = []
        labels = []
        for index in range(self.batch_size):
            self.current_index += 1
            # avoid index overflow: wrap around and reshuffle when the epoch ends
            if self.current_index > self.file_num_train - 1:
                self.current_index %= self.file_num_train
                self.images, self.labels = shuffle_data(samples=self.images, labels=self.labels)
            img = Image.open('data/PACS/pacs_data/' + self.images[self.current_index])
            img = img.convert('RGB')
            img = transform_train(img)
            img = np.array(img)
            images.append(img)
            labels.append(self.labels[self.current_index])

        return np.array(images), np.array(labels)

def get_image(images):
    images_data = []
    for img in images:
        img = Image.open('data/PACS/pacs_data/' + img)
        img = transform_train(img)
        img = np.array(img)
        images_data.append(img)
    return np.array(images_data)
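
A minimal, hedged usage sketch for the PACS generator above. The real `flags` object is built by `main_baseline.py` / `main_Feature_Critic.py` (not shown in this diff), so a `SimpleNamespace` with only the fields read by `configuration` stands in for it:
```
from types import SimpleNamespace
from data_process.data_gen_PACS import BatchImageGenerator, get_data_folder

# Stand-in for the argparse flags of the training scripts (assumed values).
flags = SimpleNamespace(batch_size=64, batch_size_metatest=32)

data_folder, train_data, val_data, test_data = get_data_folder()
gen = BatchImageGenerator(flags, stage='train',
                          file_path=data_folder + train_data[0],  # photo_train_kfold.txt
                          metatest=False, b_unfold_label=False)

images, labels = gen.get_images_labels_batch()
print(images.shape, labels.shape)  # batch of transformed images and 0-based integer labels
```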
Binary file not shown.
@@ -0,0 +1,144 @@
from pycocotools.coco import COCO
import numpy as np
from utils import unfold_label, shuffle_data
from PIL import Image
import torchvision.transforms as transforms

transform_train = transforms.Compose([
    transforms.Resize((72, 72)),
    transforms.RandomHorizontalFlip(p=0.75),
    transforms.RandomCrop(72, padding=4),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

transform_test = transforms.Compose([
    transforms.Resize((72, 72)),
    #transforms.RandomCrop(64, padding=4),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

def get_domain_name():
    return {'0': 'cifar100', '1': 'daimlerpedcls', '2': 'gtsrb', '3': 'omniglot', '4': 'svhn',
            '5': 'imagenet12', '6': 'aircraft', '7': 'dtd', '8': 'vgg-flowers', '9': 'ucf101'}

def get_data_folder():
    data_folder = 'data/VD/decathlon-1.0/annotations/'
    train_data = ['cifar100_train.json',
                  'daimlerpedcls_train.json',
                  'gtsrb_train.json',
                  'omniglot_train.json',
                  'svhn_train.json',
                  'imagenet12_train.json',
                  'aircraft_train.json',  # for test domains.
                  'dtd_train.json',
                  'vgg-flowers_train.json',
                  'ucf101_train.json']

    val_data = ['cifar100_val.json',
                'daimlerpedcls_val.json',
                'gtsrb_val.json',
                'omniglot_val.json',
                'svhn_val.json',
                'imagenet12_val.json',
                'aircraft_val.json',  # for test domains.
                'dtd_val.json',
                'vgg-flowers_val.json',
                'ucf101_val.json']

    test_data = ['cifar100_test_stripped.json',
                 'daimlerpedcls_test_stripped.json',
                 'gtsrb_test_stripped.json',
                 'omniglot_test_stripped.json',
                 'svhn_test_stripped.json',
                 'imagenet12_test_stripped.json',
                 'aircraft_test_stripped.json',  # for test domains.
                 'dtd_test_stripped.json',
                 'vgg-flowers_test_stripped.json',
                 'ucf101_test_stripped.json']

    return data_folder, train_data, val_data, test_data

class BatchImageGenerator:
    def __init__(self, flags, stage, file_path, metatest, b_unfold_label):

        if stage not in ['train', 'val', 'test']:
            raise ValueError('invalid stage!')

        self.configuration(flags, stage, file_path, metatest)
        self.load_data(b_unfold_label)

    def configuration(self, flags, stage, file_path, metatest):
        # Meta-test splits use their own batch size.
        if metatest:
            self.batch_size = flags.batch_size_metatest
        else:
            self.batch_size = flags.batch_size
        self.current_index = -1
        self.file_path = file_path
        self.stage = stage
        self.shuffled = False

    def load_data(self, b_unfold_label):
        file_path = self.file_path
        coco = COCO(file_path)
        # display COCO categories and supercategories
        cats = coco.loadCats(coco.getCatIds())
        nms = [cat['name'] for cat in cats]
        #print('COCO categories: \n{}\n'.format(' '.join(nms)))
        self.num_classes = len(cats)
        images = []
        labels = []
        for cat in cats:
            catIds = coco.getCatIds(catNms=cat['name'])
            imgIds = coco.getImgIds(catIds=catIds)
            img = coco.loadImgs(imgIds)
            # class label = (category id mod 10000) - 1, giving a zero-based class index
            labels.extend([(catIds[0] % 10000 - 1) for i in range(len(img))])
            images.extend(img)
        if len(images) == 0:
            # no labelled images found (e.g. stripped test annotations): fall back to all images
            images = coco.dataset['images']
        if b_unfold_label:
            labels = unfold_label(labels=labels, classes=len(np.unique(labels)))
        #assert len(images) == len(labels)
        self.images = np.array(images)
        self.labels = np.array(labels)
        self.file_num_train = len(self.labels)
        print('data num loaded:', self.file_num_train)
        if self.stage == 'train':
            self.images, self.labels = shuffle_data(samples=self.images, labels=self.labels)

    def get_images_labels_batch(self, batch_size=None):
        if batch_size is not None:
            self.batch_size = batch_size
        images = []
        labels = []
        for index in range(self.batch_size):
            self.current_index += 1
            # avoid index overflow: wrap around and reshuffle when the epoch ends
            if self.current_index > self.file_num_train - 1:
                self.current_index %= self.file_num_train
                self.images, self.labels = shuffle_data(samples=self.images, labels=self.labels)
            #img = cv2.imread(self.images[self.current_index]['file_name'])
            img = Image.open(self.images[self.current_index]['file_name'])
            img = img.convert('RGB')
            img = transform_train(img)
            img = np.array(img)
            images.append(img)
            labels.append(self.labels[self.current_index])

        return np.array(images), np.array(labels)

def get_image(images):
    images_data = []
    for img in images:
        img = Image.open(img['file_name'])
        img = img.convert('RGB')

        img = transform_test(img)
        img = np.array(img)
        images_data.append(img)

    return np.array(images_data)
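
Similarly, a hedged usage sketch for the VD generator. Note that `load_data` derives class labels as `(category id mod 10000) - 1`, i.e. it relies on the decathlon annotations encoding the class index in the last four digits of the category id; the `flags` object is again a stand-in for the one built by the training scripts:
```
from types import SimpleNamespace
from data_process.data_gen_VD import BatchImageGenerator, get_data_folder

# Stand-in for the argparse flags of the training scripts (assumed values).
flags = SimpleNamespace(batch_size=64, batch_size_metatest=32)

data_folder, train_data, val_data, test_data = get_data_folder()
gen = BatchImageGenerator(flags, stage='train',
                          file_path=data_folder + train_data[0],  # cifar100_train.json
                          metatest=False, b_unfold_label=False)
print('classes:', gen.num_classes, 'examples:', gen.file_num_train)

images, labels = gen.get_images_labels_batch(batch_size=8)
print(images.shape, labels.shape)  # 72x72 crops after transform_train
```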


Binary file not shown.
@@ -0,0 +1,35 @@
#!/bin/bash

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo $DIR

echo "Downloading the trained model ..."
mega-get 'https://mega.nz/#F!rRkgzawL!qoGX4bT3sif88Ho1Ke8j1Q' $DIR
echo "Done. Please verify the integrality of files"

function readfile ()
{
    for file in `ls $1`
    do
        if [ -d $1"/"$file ]
        then
            readfile $1"/"$file
        else
            echo $1"/"$file
        fi
    done
}

readfile $DIR/model_output


echo "Downloading the dataset of PACS ..."
mega-get 'https://mega.nz/#F!jBllFAaI!gOXRx97YHx-zorH5wvS6uw' $DIR/data

echo "Unzipping..."

unzip $DIR/data/PACS/pacs_data.zip
echo "Finished unzipping PACS data."
unzip $DIR/data/PACS/pacs_label.zip
echo "Finished unzipping PACS labels."

BIN +6 KB logs/.DS_Store
Binary file not shown.
