
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:151 #120

Open
Ehsan-Yaghoubi opened this issue Dec 4, 2019 · 3 comments


Ehsan-Yaghoubi commented Dec 4, 2019

First, I should mention that the code works properly for the Market1501 dataset.

I want to train the network on the RAP dataset. I have created a rap.py, shown below, that mimics market1501.py. When training starts, it runs into a RuntimeError. I debugged the code and, as far as I can tell, everything proceeds exactly as with market1501.py, but I don't know the reason for this error. The error comes from the forward function in triplet_loss.py, at the line: targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)

I debugged and saw that the size of inputs is [8, 1295] and the size of targets is [8] (when I debug the code on Market1501, these sizes are [8, 751] and [8], respectively). I should add that the batch size is 8, and the numbers of IDs in RAP and Market1501 are 1295 and 751, respectively. Changing the image size and the batch size does not help.
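For context, here is a minimal standalone sketch (not from the repository) that reproduces this failure mode: scatter_ raises exactly this error whenever any value in targets falls outside [0, num_classes - 1] along the scatter dimension.

import torch

num_classes = 751                                    # e.g. the Market1501 head size
log_probs = torch.randn(8, num_classes)
targets = torch.tensor([5, 10, 900, 3, 7, 1, 2, 4])  # 900 >= 751 -> out of range

# Same one-hot construction as in triplet_loss.py:
one_hot = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1), 1)
# RuntimeError: Invalid index in scatter (newer PyTorch wording: index out of bounds)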

The code for the RAP dataset (rap.py) is as follows:

import re
import os
from .bases import BaseImageDataset
import mat4py

class RAP(BaseImageDataset):

    dataset_dir = '/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/RAP_resized_imgs'
    RAP_reid_mat_file_path = '/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/rap_annotations/RAP_ReId_information.mat'

    def __init__(self, verbose=True, **kwargs):
        super(RAP, self).__init__()
        self._check_before_run()

        train, query, gallery = self._process_dir(self.RAP_reid_mat_file_path, self.dataset_dir)

        self.train = train
        self.query = query
        self.gallery = gallery

        if verbose:
            print("=> RAP is loaded")
            self.print_dataset_statistics(train, query, gallery)

        self.num_train_pids, self.num_train_imgs, self.num_train_cams = self.get_imagedata_info(self.train)
        self.num_query_pids, self.num_query_imgs, self.num_query_cams = self.get_imagedata_info(self.query)
        self.num_gallery_pids, self.num_gallery_imgs, self.num_gallery_cams = self.get_imagedata_info(self.gallery)


    def _check_before_run(self):
        """Check if all files are available before going deeper"""
        if not os.path.exists(self.dataset_dir):
            raise RuntimeError("'{}' is not available".format(self.dataset_dir))
        if not os.path.exists(self.RAP_reid_mat_file_path):
            raise RuntimeError("'{}' is not available".format(self.RAP_reid_mat_file_path))

    def _process_dir(self, mat_dir_path, data_dir):
        RAP_reid_information = mat4py.loadmat(filename=mat_dir_path)
        RAP_reid_information__ = mat4py.loadmat(filename="/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/rap_annotations/RAP_annotation.mat")
        image_names_ = RAP_reid_information["image_name"]
        image_names__ = RAP_reid_information__["RAP_annotation"]["name"]
        image_names_star = [subsub for sub in image_names__ for subsub in sub]
        image_names = [subsub for sub in image_names_ for subsub in sub]
        pids = RAP_reid_information["person_id"]
        # cams = RAP_reid_information["person_cam"]
        # train_ids = RAP_reid_information["train_identity"]
        # test_ids = RAP_reid_information["test_identity"]
        train_indices_ = RAP_reid_information["train_index"]
        train_indices = [subsub for sub in train_indices_ for subsub in sub]
        gallery_indices_ = RAP_reid_information["test_index"]
        gallery_indices = [subsub for sub in gallery_indices_ for subsub in sub]
        query_indices = RAP_reid_information["query_index"]


        train = []
        for i, IMG_index in enumerate(train_indices):
            img_name = image_names[IMG_index-1]
            img_full_path = os.path.join(data_dir, img_name)
            person_id = pids[IMG_index-1][0]
            camera_id = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
            train.append((img_full_path, person_id, int(re.findall(r'\d+', camera_id)[0])))

        train_star = []
        for i, IMG_index in enumerate(train_indices):
            img_name = image_names_star[IMG_index-1]
            img_full_path = os.path.join(data_dir, img_name)
            person_id = pids[IMG_index-1][0]
            camera_id = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
            train_star.append((img_full_path, person_id, int(re.findall(r'\d+', camera_id)[0])))

        query = []
        for i, IMG_index in enumerate(query_indices):
            img_name = image_names[IMG_index-1]
            img_full_path = os.path.join(data_dir, img_name)
            person_id = pids[IMG_index-1][0]
            camera_id = img_name.split("-")[0]
            query.append((img_full_path, person_id, int(re.findall(r'\d+', camera_id)[0])))

        gallery = []
        for i, IMG_index in enumerate(gallery_indices):
            img_name = image_names[IMG_index-1]
            img_full_path = os.path.join(data_dir, img_name)
            person_id = pids[IMG_index-1][0]
            camera_id = img_name.split("-")[0]
            gallery.append((img_full_path, person_id, int(re.findall(r'\d+', camera_id)[0])))

        return train, query, gallery

Error traceback:

Traceback (most recent call last):
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 154, in <module>
    main()
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 150, in main
    train(cfg)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 68, in train
    start_epoch     # add for using self trained model
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 208, in do_train
    trainer.run(train_loader, max_epochs=epochs)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 326, in run
    self._handle_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 313, in run
    hours, mins, secs = self._run_once_on_dataset()
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 280, in _run_once_on_dataset
    self._handle_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 272, in _run_once_on_dataset
    self.state.output = self._process_function(self, batch)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 47, in _update
    loss = loss_fn(score, feat, target)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/__init__.py", line 35, in loss_func
    return xent(score, target) + triplet(feat, target)[0]
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/triplet_loss.py", line 154, in forward
    targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:151

Logs/initialization:

2019-12-04 10:01:26,619 reid_baseline INFO: Using 1 GPUS
2019-12-04 10:01:26,619 reid_baseline INFO: Namespace(config_file='../configs/softmax_triplet.yml', opts=[])
2019-12-04 10:01:26,619 reid_baseline INFO: Loaded configuration file ../configs/softmax_triplet.yml
2019-12-04 10:01:26,619 reid_baseline INFO: 
MODEL:
  PRETRAIN_CHOICE: 'imagenet'
  PRETRAIN_PATH: '/home/eshan/.torch/models/resnet50-19c8e357.pth'
  METRIC_LOSS_TYPE: 'triplet'
  IF_LABELSMOOTH: 'on'
  IF_WITH_CENTER: 'no'




INPUT:
  SIZE_TRAIN: [128, 128]  # SIZE_TRAIN: [256, 128]
  SIZE_TEST: [128, 128]   # SIZE_TEST: [256, 128]

  PROB: 0.5 # random horizontal flip
  RE_PROB: 0.5 # random erasing
  PADDING: 10

DATASETS:
  NAMES: ('RAP') # NAMES: ('market1501')

DATALOADER:
  SAMPLER: 'softmax_triplet'
  NUM_INSTANCE: 4
  NUM_WORKERS: 8

SOLVER:
  OPTIMIZER_NAME: 'Adam'
  MAX_EPOCHS: 120
  BASE_LR: 0.00035

  CLUSTER_MARGIN: 0.3

  CENTER_LR: 0.5
  CENTER_LOSS_WEIGHT: 0.0005

  RANGE_K: 2
  RANGE_MARGIN: 0.3
  RANGE_ALPHA: 0
  RANGE_BETA: 1
  RANGE_LOSS_WEIGHT: 1

  BIAS_LR_FACTOR: 1
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
  IMS_PER_BATCH: 8

  STEPS: [40, 70]
  GAMMA: 0.1

  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 10
  WARMUP_METHOD: 'linear'

  CHECKPOINT_PERIOD: 40
  LOG_PERIOD: 20
  EVAL_PERIOD: 40

TEST:
  IMS_PER_BATCH: 128
  RE_RANKING: 'no'
  WEIGHT: "path"
  NECK_FEAT: 'after'
  FEAT_NORM: 'yes'

OUTPUT_DIR: "/media/ehsan/HDD2TB/PersonReIdentification/reid-strong-baseline/reid_on_RAP_RESULTS"


2019-12-04 10:01:26,619 reid_baseline INFO: Running with config:
DATALOADER:
  NUM_INSTANCE: 4
  NUM_WORKERS: 8
  SAMPLER: softmax_triplet
DATASETS:
  NAMES: RAP
  ROOT_DIR: /media/ehsan/HDD2TB/PersonReIdentification/DATASET_Person_Reidentification
INPUT:
  PADDING: 10
  PIXEL_MEAN: [0.485, 0.456, 0.406]
  PIXEL_STD: [0.229, 0.224, 0.225]
  PROB: 0.5
  RE_PROB: 0.5
  SIZE_TEST: [128, 128]
  SIZE_TRAIN: [128, 128]
MODEL:
  DEVICE: cuda
  DEVICE_ID: 0
  IF_LABELSMOOTH: on
  IF_WITH_CENTER: no
  LAST_STRIDE: 1
  METRIC_LOSS_TYPE: triplet
  NAME: resnet50
  NECK: bnneck
  PRETRAIN_CHOICE: imagenet
  PRETRAIN_PATH: /home/eshan/.torch/models/resnet50-19c8e357.pth
OUTPUT_DIR: /media/ehsan/HDD2TB/PersonReIdentification/reid-strong-baseline/reid_on_RAP_RESULTS
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 1
  CENTER_LOSS_WEIGHT: 0.0005
  CENTER_LR: 0.5
  CHECKPOINT_PERIOD: 40
  CLUSTER_MARGIN: 0.3
  EVAL_PERIOD: 40
  GAMMA: 0.1
  IMS_PER_BATCH: 8
  LOG_PERIOD: 20
  MARGIN: 0.3
  MAX_EPOCHS: 120
  MOMENTUM: 0.9
  OPTIMIZER_NAME: Adam
  RANGE_ALPHA: 0
  RANGE_BETA: 1
  RANGE_K: 2
  RANGE_LOSS_WEIGHT: 1
  RANGE_MARGIN: 0.3
  STEPS: (40, 70)
  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 10
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
TEST:
  FEAT_NORM: yes
  IMS_PER_BATCH: 128
  NECK_FEAT: after
  RE_RANKING: no
  WEIGHT: path
=> RAP is loaded
Dataset statistics:
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |  1295 |    13178 |        23
  query    |  1294 |     7202 |        23
  gallery  |  1295 |    28407 |        23
  ----------------------------------------
Loading pretrained ImageNet model......
Train without center loss, the loss type is triplet
label smooth on, numclasses: 1295
2019-12-04 10:01:32,222 reid_baseline.train INFO: Start training

Traceback (most recent call last):
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 154, in <module>
    main()
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 150, in main
    train(cfg)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 68, in train
    start_epoch     # add for using self trained model
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 208, in do_train
    trainer.run(train_loader, max_epochs=epochs)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 326, in run
    self._handle_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 313, in run
    hours, mins, secs = self._run_once_on_dataset()
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 280, in _run_once_on_dataset
    self._handle_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite/engine/engine.py", line 272, in _run_once_on_dataset
    self.state.output = self._process_function(self, batch)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 47, in _update
    loss = loss_fn(score, feat, target)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/__init__.py", line 35, in loss_func
    return xent(score, target) + triplet(feat, target)[0]
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/triplet_loss.py", line 154, in forward
    targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:151

Possible problem:

  1. Maybe the problem is that somewhere in the code the targets are defined for 751 classes, while scatter_ sees an array sized for 1295. I am a newbie in PyTorch; could you please guide me through it? (A quick check is sketched below.)
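For example (a debugging sketch; dataset stands for the RAP instance built above, whose train entries are (img_path, pid, camid) tuples), printing the label range before training starts:

pids = [pid for _, pid, _ in dataset.train]
num_classes = len(set(pids))
print("min pid:", min(pids), "max pid:", max(pids), "num ids:", num_classes)
# scatter_ needs 0 <= pid <= num_classes - 1 for every sample,
# so max(pids) must be strictly less than num_classes.

With the rap.py above, the raw RAP person IDs can be far larger than 1294 even though only 1295 distinct IDs occur in the train split, which is exactly the mismatch scatter_ rejects.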
@michuanhaohao (Owner) commented:

You should write a RAP.py yourself and confirm that self.num_train_pids == 1295. The training set of Market1501 has 751 IDs, so self.num_train_pids == 751 for Market1501.
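For reference, market1501.py achieves this by remapping the raw person IDs to contiguous training labels. A sketch of that pattern (train_img_paths and parse_pid are hypothetical stand-ins for the dataset-specific file listing and ID extraction):

# First pass: collect the distinct raw IDs that appear in the train split.
pid_container = set()
for img_path in train_img_paths:
    pid = parse_pid(img_path)  # hypothetical helper: raw person ID from the file name
    if pid == -1:
        continue               # junk images are ignored
    pid_container.add(pid)

# Map raw IDs onto labels 0 .. num_train_pids - 1.
pid2label = {pid: label for label, pid in enumerate(pid_container)}

# Second pass: build the train list with relabeled IDs, e.g.
#   train.append((img_path, pid2label[pid], camid))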

@Ehsan-Yaghoubi (Author) commented Dec 5, 2019

Thank you for your reply. I have already written a script for the RAP dataset.

The same problem has been asked in the links below, but no solution is given there:
https://www.gitmemory.com/issue/KaiyangZhou/deep-person-reid/190/502672175
NVlabs/SPADE#68
https://discuss.pytorch.org/t/invalid-index-at-scatter/30805

Here is the code that I have developed, with the extra assertions you suggested. I have included the logs as well. As you can see, the number of features in the last layer is 1295, which matches the number of train IDs. I have also resized the images to 128x64, but it didn't help. I even selected only 751 IDs to see whether the problem would be solved, but that didn't help either.

Any suggestions?
Is the problem with the forward function in triplet_loss.py?
I debugged the forward function in triplet_loss.py and received these shapes:

 inputs.shape is:             torch.Size([64, 1295])
 targets.shape is:            torch.Size([64])
 log_probs.size of inputs is: torch.Size([64, 1295])
 unsqueezed targets.shape is: torch.Size([64, 1])
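Given these shapes, a one-line guard (illustrative only, separate from my rap.py below) placed just before the scatter_ call in triplet_loss.py would pinpoint bad labels immediately:

# Inside the label-smoothing forward, before building the one-hot targets:
assert 0 <= targets.min().item() and targets.max().item() < log_probs.size(1), \
    "label out of range: min={}, max={}, num_classes={}".format(
        targets.min().item(), targets.max().item(), log_probs.size(1))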

import re
import os
from .bases import BaseImageDataset
import mat4py


class RAP(BaseImageDataset):

    dataset_dir = '/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/RAP_resized_imgs'
    RAP_reid_mat_file_path = '/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/rap_annotations/RAP_ReId_information.mat'

    def __init__(self, verbose=True, **kwargs):
        super(RAP, self).__init__()
        self._check_before_run()

        train, query, gallery = self._process_dir(self.RAP_reid_mat_file_path, self.dataset_dir)

        self.train = train
        self.query = query
        self.gallery = gallery


        if verbose:
            print("=> RAP is loaded")
            self.print_dataset_statistics(train, query, gallery)

        self.num_train_pids, self.num_train_imgs, self.num_train_cams = self.get_imagedata_info(self.train)
        self.num_query_pids, self.num_query_imgs, self.num_query_cams = self.get_imagedata_info(self.query)
        self.num_gallery_pids, self.num_gallery_imgs, self.num_gallery_cams = self.get_imagedata_info(self.gallery)

        assert self.num_train_pids == 1295


    def _check_before_run(self):
        """Check if all files are available before going deeper"""
        if not os.path.exists(self.dataset_dir):
            raise RuntimeError("'{}' is not available".format(self.dataset_dir))
        if not os.path.exists(self.RAP_reid_mat_file_path):
            raise RuntimeError("'{}' is not available".format(self.RAP_reid_mat_file_path))

    def _process_dir(self, mat_dir_path, data_dir):
        RAP_reid_information = mat4py.loadmat(filename=mat_dir_path)
        RAP_reid_information__ = mat4py.loadmat(filename="/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper/rap_annotations/RAP_annotation.mat")
        image_names_ = RAP_reid_information["image_name"]
        image_names__ = RAP_reid_information__["RAP_annotation"]["name"]
        image_names_star = [subsub for sub in image_names__ for subsub in sub]
        image_names = [subsub for sub in image_names_ for subsub in sub]
        pids = RAP_reid_information["person_id"]
        # cams = RAP_reid_information["person_cam"]
        # train_ids = RAP_reid_information["train_identity"]
        # test_ids = RAP_reid_information["test_identity"]
        train_indices_ = RAP_reid_information["train_index"]
        train_indices = [subsub for sub in train_indices_ for subsub in sub]
        gallery_indices_ = RAP_reid_information["test_index"]
        gallery_indices = [subsub for sub in gallery_indices_ for subsub in sub]
        query_indices = RAP_reid_information["query_index"]



        train = []
        ids = []
        cam_ids = []
        for i1, IMG_index1 in enumerate(train_indices):
                person_id = pids[IMG_index1-1][0]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index1-1]
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                person_id -= 1 # index should start from 0
                camera_id -= 1 # index should start from 0
                assert 0 <= person_id <= 2588  # There are 2589 person identities in RAP dataset
                assert 0 <= camera_id <= 30 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                train.append((img_full_path, person_id , camera_id))
        print(">> Number of train Images: ", len(set(train)))
        print(">> Number of train IDs: ",len(set(ids)))
        print(">> Number of train Camera IDs: ",len(set(cam_ids)))

        query= []
        ids = []
        cam_ids = []
        for i2, IMG_index2 in enumerate(query_indices):
                person_id = pids[IMG_index2-1][0]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index2-1]
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                person_id -= 1 # index should start from 0
                camera_id -= 1 # index should start from 0
                assert 0 <= person_id <= 2588  # There are 2589 person identities in RAP dataset
                assert 0 <= camera_id <= 30 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                query.append((img_full_path, person_id , camera_id))
        print(">>>> Number of query Images: ",len(set(query)))
        print(">>>> Number of query IDs: ",len(set(ids)))
        print(">>>> Number of query Camera IDs: ",len(set(cam_ids)))

        gallery= []
        ids = []
        cam_ids = []
        for i3, IMG_index3 in enumerate(gallery_indices):
                person_id = pids[IMG_index3-1][0]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index3-1]
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                person_id -= 1 # index should start from 0
                camera_id -= 1 # index should start from 0
                assert 0 <= person_id <= 2588  # There are 2589 person identities in RAP dataset
                assert 0 <= camera_id <= 30 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                gallery.append((img_full_path, person_id , camera_id))
        print(">>>>>> Number of gallery Images: ",len(set(gallery)))
        print(">>>>>> Number of gallery IDs: ",len(set(ids)))
        print(">>>>>> Number of gallery Camera IDs: ",len(set(cam_ids)))

        return train, query, gallery


The logs are as follows:

2019-12-05 12:29:22,158 reid_baseline INFO: Using 1 GPUS
2019-12-05 12:29:22,158 reid_baseline INFO: Namespace(config_file='../configs/softmax_triplet.yml', opts=[])
2019-12-05 12:29:22,158 reid_baseline INFO: Loaded configuration file ../configs/softmax_triplet.yml
2019-12-05 12:29:22,158 reid_baseline INFO: 
MODEL:
  PRETRAIN_CHOICE: 'imagenet'
  PRETRAIN_PATH: '/home/eshan/.torch/models/resnet50-19c8e357.pth'
  METRIC_LOSS_TYPE: 'triplet'
  IF_LABELSMOOTH: 'on'
  IF_WITH_CENTER: 'no'




INPUT:
  SIZE_TRAIN: [256, 128]
  SIZE_TEST: [256, 128]
  PROB: 0.5 # random horizontal flip
  RE_PROB: 0.5 # random erasing
  PADDING: 10

DATASETS:
  NAMES: ('RAP')

DATALOADER:
  SAMPLER: 'softmax_triplet'
  NUM_INSTANCE: 4
  NUM_WORKERS: 8

SOLVER:
  OPTIMIZER_NAME: 'Adam'
  MAX_EPOCHS: 120
  BASE_LR: 0.00035

  CLUSTER_MARGIN: 0.3

  CENTER_LR: 0.5
  CENTER_LOSS_WEIGHT: 0.0005

  RANGE_K: 2
  RANGE_MARGIN: 0.3
  RANGE_ALPHA: 0
  RANGE_BETA: 1
  RANGE_LOSS_WEIGHT: 1

  BIAS_LR_FACTOR: 1
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
  IMS_PER_BATCH: 64

  STEPS: [40, 70]
  GAMMA: 0.1

  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 10
  WARMUP_METHOD: 'linear'

  CHECKPOINT_PERIOD: 40
  LOG_PERIOD: 20
  EVAL_PERIOD: 40

TEST:
  IMS_PER_BATCH: 128
  RE_RANKING: 'no'
  WEIGHT: "path"
  NECK_FEAT: 'after'
  FEAT_NORM: 'yes'

OUTPUT_DIR: "/media/ehsan/HDD2TB/PersonReIdentification/reid-strong-baseline/reid_on_RAP_RESULTS"


2019-12-05 12:29:22,159 reid_baseline INFO: Running with config:
DATALOADER:
  NUM_INSTANCE: 4
  NUM_WORKERS: 8
  SAMPLER: softmax_triplet
DATASETS:
  NAMES: RAP
  ROOT_DIR: /media/ehsan/HDD2TB/PersonReIdentification/DATASET_Person_Reidentification/RAP/RAP128x64
INPUT:
  PADDING: 10
  PIXEL_MEAN: [0.485, 0.456, 0.406]
  PIXEL_STD: [0.229, 0.224, 0.225]
  PROB: 0.5
  RE_PROB: 0.5
  SIZE_TEST: [256, 128]
  SIZE_TRAIN: [256, 128]
MODEL:
  DEVICE: cuda
  DEVICE_ID: 0
  IF_LABELSMOOTH: on
  IF_WITH_CENTER: no
  LAST_STRIDE: 1
  METRIC_LOSS_TYPE: triplet
  NAME: resnet50
  NECK: bnneck
  PRETRAIN_CHOICE: imagenet
  PRETRAIN_PATH: /home/eshan/.torch/models/resnet50-19c8e357.pth
OUTPUT_DIR: /media/ehsan/HDD2TB/PersonReIdentification/reid-strong-baseline/reid_on_RAP_RESULTS
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 1
  CENTER_LOSS_WEIGHT: 0.0005
  CENTER_LR: 0.5
  CHECKPOINT_PERIOD: 40
  CLUSTER_MARGIN: 0.3
  EVAL_PERIOD: 40
  GAMMA: 0.1
  IMS_PER_BATCH: 64
  LOG_PERIOD: 20
  MARGIN: 0.3
  MAX_EPOCHS: 120
  MOMENTUM: 0.9
  OPTIMIZER_NAME: Adam
  RANGE_ALPHA: 0
  RANGE_BETA: 1
  RANGE_K: 2
  RANGE_LOSS_WEIGHT: 1
  RANGE_MARGIN: 0.3
  STEPS: (40, 70)
  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 10
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
TEST:
  FEAT_NORM: yes
  IMS_PER_BATCH: 128
  NECK_FEAT: after
  RE_RANKING: no
  WEIGHT: path
>> Number of train Images:  13178
>> Number of train IDs:  1295
>> Number of train Camera IDs:  23
>>>> Number of query Images:  7202
>>>> Number of query IDs:  1294
>>>> Number of query Camera IDs:  23
>>>>>> Number of gallery Images:  13460
>>>>>> Number of gallery IDs:  1294
>>>>>> Number of gallery Camera IDs:  23
=> RAP is loaded
Dataset statistics:
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |  1295 |    13178 |        23
  query    |  1294 |     7202 |        23
  gallery  |  1294 |    13460 |        23
  ----------------------------------------
Loading pretrained ImageNet model......
Baseline(
  (base): ResNet(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer2): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (3): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer3): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (3): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (4): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (5): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer4): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
  )
  (gap): AdaptiveAvgPool2d(output_size=1)
  (bottleneck): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (classifier): Linear(in_features=2048, out_features=1295, bias=False)
)
Train without center loss, the loss type is triplet
>>>
	num_classes:  1295
label smooth on, numclasses: 1295
2019-12-05 12:29:27,672 reid_baseline.train INFO: Start training
/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/torch/optim/lr_scheduler.py:82: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
torch.Size([64, 1295])
torch.Size([64])
Traceback (most recent call last):
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 156, in <module>
    main()
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 152, in main
    train(cfg)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/tools/train.py", line 68, in train
    start_epoch     # add for using self trained model
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 208, in do_train
    trainer.run(train_loader, max_epochs=epochs)
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 326, in run
    self._handle_exception(e)
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 313, in run
    hours, mins, secs = self._run_once_on_dataset()
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 280, in _run_once_on_dataset
    self._handle_exception(e)
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 291, in _handle_exception
    raise e
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/ignite/engine/engine.py", line 272, in _run_once_on_dataset
    self.state.output = self._process_function(self, batch)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/engine/trainer.py", line 47, in _update
    loss = loss_fn(score, feat, target)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/__init__.py", line 35, in loss_func
    return xent(score, target) + triplet(feat, target)[0]
  File "/home/eshan/.virtualenvs/baseline_person_reidentification/lib/python3.5/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/layers/triplet_loss.py", line 155, in forward
    targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:564

Process finished with exit code 1



@Ehsan-Yaghoubi (Author) commented:

When I relabeled the IDs to start from zero, the error disappeared.
I changed the above code as follows:

import re
import os
from .bases import BaseImageDataset
import mat4py
from RAP_script.rap_data_loading import load_reid_data
from RAP_script.rap_data_loading import load_rap_dataset
from config import cfg

class RAP(BaseImageDataset):

    dataset_dir = 'RAP_resized_imgs256x256'
    rap_mat_file = 'Anchor_level_rap/rap_annotations/RAP_annotation.mat'
    rap_json_data = "/media/ehsan/48BE4782BE476810/AA_GITHUP/forked_reid_baseline/RAP_script/rap_data.json"

    def __init__(self, root='/media/ehsan/48BE4782BE476810/AA_GITHUP/Anchor_Level_Paper', verbose=True, **kwargs):
        super(RAP, self).__init__()
        self.dataset_dir = os.path.join(root, self.dataset_dir)
        self.rap_mat_file = os.path.join(root, self.rap_mat_file)
        self._check_before_run()

        train, query, gallery, rap_data = self._process_dir(self.rap_mat_file, self.dataset_dir, relabel= True)

        self.train = train
        self.query = query
        self.gallery = gallery
        self.rap_data = rap_data


        if verbose:
            print("=> RAP is loaded")
            self.print_dataset_statistics(train, query, gallery)

        self.num_train_pids, self.num_train_imgs, self.num_train_cams = self.get_imagedata_info(self.train)
        self.num_query_pids, self.num_query_imgs, self.num_query_cams = self.get_imagedata_info(self.query)
        self.num_gallery_pids, self.num_gallery_imgs, self.num_gallery_cams = self.get_imagedata_info(self.gallery)

        #assert self.num_train_pids == 1295


    def _check_before_run(self):
        """Check if all files are available before going deeper"""
        if not os.path.exists(self.dataset_dir):
            raise RuntimeError("'{}' is not available".format(self.dataset_dir))
        if not os.path.exists(self.rap_mat_file):
            raise RuntimeError("'{}' is not available".format(self.rap_mat_file))


    def _process_dir(self, rap_mat_file, data_dir, relabel):
        image_names, pids, train_indices, gallery_indices, query_indices=load_reid_data(rap_mat_file)
        rap_data = load_rap_dataset(rap_attributes_filepath=self.rap_mat_file, rap_keypoints_json=self.rap_json_data, load_from_file=True)
        pid_container = set()
        for index, IMG_index1 in enumerate(train_indices):
            person_id = pids[IMG_index1-1]
            if person_id == -1: continue  # junk images are just ignored
            pid_container.add(person_id)
        pid2label = {pid: label for label, pid in enumerate(pid_container)}

        train = []
        ids = []
        cam_ids = []
        flag = 0
        for index, IMG_index1 in enumerate(train_indices):
            try:
                person_id = pids[IMG_index1-1]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index1-1]
                img_labels = rap_data[img_name]['attrs']
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                assert 1 <= person_id <= 2589  # There are 2589 person identities in RAP dataset
                assert 1 <= camera_id <= 31 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                if relabel: person_id = pid2label[person_id] # train ids must be relabelled from zero
                train.append((img_full_path, person_id, camera_id, img_labels))
            except KeyError:
                flag += 1
                #print(" Warning at Train set {}: information of below image is not found in rap_data.\n{}".format(flag, img_name))
        print(" Warning at Train set: information of {} images are not found in rap_data".format(flag))
        # print(">> Number of train Images: ", len(set(train)))
        # print(">> Number of train IDs: ",len(set(ids)))
        # print(">> Number of train Camera IDs: ",len(set(cam_ids)))

        query= []
        ids = []
        cam_ids = []
        flag = 0
        for i2, IMG_index2 in enumerate(query_indices):
            try:
                person_id = pids[IMG_index2-1]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index2-1]
                img_labels = rap_data[img_name]['attrs']
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                assert 1 <= person_id <= 2589  # There are 2589 person identities in RAP dataset
                assert 1 <= camera_id <= 31 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                query.append((img_full_path, person_id , camera_id, img_labels))
            except KeyError:
                flag += 1
                #print(" Warning at Query set {}: information of below image is not found in rap_data.\n{}".format(flag, img_name))
        print(" Warning at Train set: information of {} images are not found in rap_data".format(flag))
        # print(">>>> Number of query Images: ",len(set(query)))
        # print(">>>> Number of query IDs: ",len(set(ids)))
        # print(">>>> Number of query Camera IDs: ",len(set(cam_ids)))

        gallery= []
        ids = []
        cam_ids = []
        flag = 0
        for i3, IMG_index3 in enumerate(gallery_indices):
            try:
                person_id = pids[IMG_index3-1]
                if person_id == -1: continue  # junk images are just ignored
                img_name = image_names[IMG_index3-1]
                img_labels = rap_data[img_name]['attrs']
                img_full_path = os.path.join(data_dir, img_name)
                camera = img_name.split("-")[0] # e.g. ['CAM01', '2014', '02', '15', '20140215161032', '20140215162620', 'tarid0', 'frame218', 'line1.png']
                camera_id = int(re.findall(r'\d+', camera)[0])
                assert 1 <= person_id <= 2589  # There are 2589 person identities in RAP dataset
                assert 1 <= camera_id <= 31 # There are 23 cameras with labels between 1 to 31
                cam_ids.append(camera_id)
                ids.append(person_id)
                gallery.append((img_full_path, person_id , camera_id, img_labels))
            except KeyError:
                flag += 1
                #print(" Warning at Gallery set {}: information of below image is not found in rap_data.\n{}".format(flag, img_name))
        print(" Warning at Train set: information of {} images are not found in rap_data".format(flag))
        # print(">>>>>> Number of gallery Images: ",len(set(gallery)))
        # print(">>>>>> Number of gallery IDs: ",len(set(ids)))
        # print(">>>>>> Number of gallery Camera IDs: ",len(set(cam_ids)))

        return train, query, gallery, rap_data
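With the relabeling in place, a short sanity check (illustrative) confirms that the train labels form the contiguous 0 .. N-1 range that the scatter_-based label-smoothing loss requires. Note that pid2label is built before the rap_data KeyError filter, so if that filter ever dropped every image of some ID the labels would stop being contiguous; this check would catch that:

labels = sorted({pid for _, pid, _, _ in train})  # train tuples: (path, pid, camid, attrs)
assert labels[0] == 0 and labels[-1] == len(labels) - 1, "train labels are not contiguous"
print("num_train_pids:", len(labels))             # expected: 1295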

@michuanhaohao added the "Done" label on May 11, 2020.