
Loading Data for Tracking #113

Open
nikeshdevkota opened this issue Nov 10, 2023 · 7 comments

@nikeshdevkota
nikeshdevkota commented Nov 10, 2023

Does anyone have any idea about loading a custom dataset for tracking scenarios?

I managed to load a dataset for the detection scenario by modifying "src/generate_coco_from_mot.py", but I am not sure how to load a sequential dataset for tracking.

I just have one sequential set of data for training and one for testing. The training dataset is further divided into two other sequential datasets using the following calls:

    generate_coco_from_infrared(
        'custom_data_cross_val_coco',
        seqs_names=['abc'],
        frame_range={'start': 0, 'end': 0.3})

    generate_coco_from_infrared(
        'custom_data_train_coco',
        seqs_names=['abc'],
        frame_range={'start': 0.3, 'end': 1})
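
(As far as I understand it, frame_range is just a fractional slice of each sequence, so both subsets stay sequential. Roughly, with placeholder paths just for illustration:)

    import os

    # placeholder paths/values for illustration, matching the folder tree below
    seq_dir = 'custom_dataset/train/custom_data'
    frame_range = {'start': 0, 'end': 0.3}  # first 30% of the sequence -> cross_val split

    frames = sorted(os.listdir(os.path.join(seq_dir, 'image')))
    start_frame = int(frame_range['start'] * len(frames))
    end_frame = int(frame_range['end'] * len(frames))
    cross_val_frames = frames[start_frame:end_frame]  # contiguous slice, so the split stays sequential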

How do I perform tracking for the cross-validation dataset in this scenario?

The dataset folder structure is like this:

|-- custom_dataset
|   |-- train
|   |   |-- custom_data
|   |   |   |-- gt
|   |   |   |   |-- gt.txt
|   |   |   |-- image
|   |   |   |-- seqinfo.ini
|   |-- test
|   |   |-- custom_data
|   |   |   |-- gt
|   |   |   |   |-- gt.txt
|   |   |   |-- image
|   |   |   |-- seqinfo.ini
|   |-- annotations
|   |   |-- custom_data_train.json
|   |   |-- custom_data_cross_val.json
|   |-- custom_data_train
|   |   |-- *.jpg
|   |-- custom_data_cross_val
|   |   |-- *.jpg
@insookim43

Hi! I am working on this too. I think I will execute track.py on the whole train set, which should do the tracking like in the demo, though I have to generate annotations for the whole train data again.

@nikeshdevkota
Author

I managed to load the val dataset for tracking during training and then run tracking on the test data separately. If you generated COCO annotations with "src/generate_coco_from_mot17.py" for the whole training data and split it further into training and validation data, you don't need to create another annotation file for tracking on the validation data, provided the splits are sequential.

@insookim43

insookim43 commented Nov 17, 2023

So you meant tracking (or testing) inside the training phase. I thought you wanted to use the trained model after the whole training finished, e.g. after 20 epochs.

Glad to hear yours went well, because I am facing some errors in the validation step :( (I opened another issue).
I also generated the train/val set using the logic from src/generate_coco_from_mot17.py.
But I used different sequences in the train and validation datasets (e.g. train_split contains {seqA, seqB, seqC} and val_split contains {seqD, seqE}).
I am going to try splitting the same sequence into train and validation data like you did.
What still confuses me is that I should be able to use different sequences in the train and validation datasets too.
Thanks to you, I am now going to look into generate_coco_from_mot17.py.
I must have missed some detail when generating COCO from my custom data.

@nikeshdevkota
Author

I tracked separate cross-validation data during the training phase. After the whole training was completed, I used the model with the best MOTA for tracking the separate test data.
If your original train data contains {seqA, seqB, seqC, seqD, seqE}, you can also run tracking on only the val_split {seqD, seqE} separately during training. But you need to change the code in factory.py, mot17_wrapper.py, and mot17_sequence.py inside the dataset/tracking folder.
Regarding the issue you opened, your error comes from line 59 of factory.py, because the dataset name you specified in track.yaml is not in DATASETS. While loading the dataset for tracking, also check whether the folder structure you described is correct.

In the original code, the dataset_name = 'MOT17-ALL-ALL' from track.yaml is found inside DATASETS, but for your data the name and folder structure are not the same.
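
Roughly, the check in factory.py behaves like this (a sketch with placeholder names, not the exact source):

    # sketch only: the name from track.yaml has to be a key of the DATASETS
    # dict before tracking can start
    DATASETS = {'MOT17-ALL-ALL': lambda kwargs: None}  # stand-in for the real registrations

    def build_tracking_data(dataset_name, kwargs):
        if dataset_name not in DATASETS:
            raise ValueError(f"[!] Dataset not found: {dataset_name}")
        return DATASETS[dataset_name](kwargs)

    build_tracking_data('MOT17-ALL-ALL', {})      # found, works
    try:
        build_tracking_data('custom_data', {})    # fails until 'custom_data' is registered
    except ValueError as err:
        print(err)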

Also, you need to change how self._data is loaded during training and testing: you don't want to concatenate the validation data and the test data. For this, add a separate argument to TrackFactoryDataset, something like the following:

    if mode == 'train':
        # Load validation data during training
        self._data = DATASETS[cross_validation_data](**kwargs)
    elif mode == 'test':
        # Load test data during testing
        self._data = DATASETS[test_data](**kwargs)
    else:
        raise ValueError(f"[!] Unknown mode: {mode}")
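
With that extra argument, you would then build the tracking dataset with the matching mode during training and testing, e.g. (hypothetical usage, the dataset name and kwargs are placeholders):

    # hypothetical usage of the extra `mode` argument
    dataset_kwargs = {'root_dir': 'custom_dataset'}  # placeholder kwargs

    val_tracking_data = TrackFactoryDataset('custom_data', mode='train', **dataset_kwargs)
    test_tracking_data = TrackFactoryDataset('custom_data', mode='test', **dataset_kwargs)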


[Screenshots: Factory Error 1, Factory Error]

@insookim43

Thank you so much for the great advice.
As you said, I have to make several changes across the tracking pipeline.
(Indeed, I had been trying to pass "dataset_name: MOT17-ALL-ALL" when tracking custom data.)
And yes, changes also have to be made to the files in charge of wrapping data sequences into a tracking dataset (factory.py, mot17_wrapper.py, and mot17_sequence.py). I think most of the work has to be done in mot17_sequence.py, so I will start from there.

@nikeshdevkota
Author

@insookim43 did you change the code and evaluate the test data?

@insookim43

> @insookim43 did you change the code and evaluate the test data?

Yes, I changed factory.py. I made DATASETS[name] return a CUSTOMDATA_Wrapper instead of MOT17Wrapper.
In CUSTOMDATA_Wrapper I don't use "det", because I don't use public detections. I have rearranged the data folder structure too.
The DATASETS dict now contains the custom data wrapper like this:

for split in ['TRAIN', 'TEST', 'flir_adas_v2',
              'video-BzZspxAweF8AnKhWK', 
              'video-FkqCGijjAKpABetZZ', 
              'video-PGdt7pJChnKoJDt35', 
              'video-RMxN6a4CcCeLGu4tA', 
              'video-YnfPeH8i2uBWmsSd2', 
              'video-dvZBYnphN2BwdMKBc', 
              'video-hnbGXq3nNPjBbc7CL', 
              'video-msNEBxJE5PPDqenBM']:
    name = f"{split}"
    DATASETS[name] = (
        lambda kwargs, split=split: FLIR_ADAS_V2_Wrapper(split, dets=None, **kwargs))

I also made a customdata_wrapper.py that works without dets, and a customdata_sequence.py; they are largely based on MOT17Wrapper.
I didn't know how to avoid the 'det' option in the original MOT17Wrapper, so I just reimplemented the wrapper without that part.
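
For anyone reading later, a stripped-down dets-free wrapper in that style looks roughly like this (a sketch only; the class names, sequence lists, and the CustomDataSequence import are placeholders, not the actual customdata_wrapper.py):

    from torch.utils.data import Dataset

    # placeholder import: stands in for the dets-free sequence class
    # in customdata_sequence.py
    from customdata_sequence import CustomDataSequence


    class CustomDataWrapper(Dataset):
        """Returns one dets-free sequence per index, one split per wrapper."""

        def __init__(self, split, **kwargs):
            # placeholder sequence names; in practice they mirror the data folders
            sequences = {'TRAIN': ['video-BzZspxAweF8AnKhWK'],
                         'TEST': ['video-FkqCGijjAKpABetZZ']}
            if split not in sequences:
                raise NotImplementedError(f"Split not available: {split}")

            # no dets argument here, only ground-truth-driven sequences
            self._data = [CustomDataSequence(seq_name=seq, **kwargs)
                          for seq in sequences[split]]

        def __len__(self):
            return len(self._data)

        def __getitem__(self, idx):
            return self._data[idx]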

Regarding how self._data is loaded during training and testing, that is not done yet. I will look into it right after handling an error in another part of the model.

My model consumes a lot of memory and just stopped after 20 epochs, but it has been trained.
Evaluating my custom data with track.py also had some (trivial?) errors, but it worked as well.
I just returned from holiday, sorry for the late reply.
