Question about training on the DepthTrack dataset #15

Open
TonikLeung opened this issue Feb 23, 2023 · 6 comments
@TonikLeung

Hello, I tried to train DeT_DiMP50_Mean on the DepthTrack dataset. I set the paths and selected a few sequences to test the training. When I ran “python run_training.py dimp DeT_DiMP50_Mean”, the following error was raised.

I tracked the error; it is caught in ltr/trainers/base_trainer.py:

Restarting training from last epoch ...
/home/cat/ljt/DeT/checkpoints/ltr/dimp/DeT_DiMP50_Mean/DiMPnet_DeT_ep*.pth.tar
Training crashed at epoch 51
Traceback for the error!
Traceback (most recent call last):
File "../ltr/trainers/base_trainer.py", line 70, in train
self.train_epoch()
File "../ltr/trainers/ltr_trainer.py", line 80, in train_epoch
self.cycle_dataset(loader)
File "../ltr/trainers/ltr_trainer.py", line 52, in cycle_dataset
for i, data in enumerate(loader, 1):
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "pandas/_libs/parsers.pyx", line 1095, in pandas._libs.parsers.TextReader._convert_tokens
TypeError: Cannot cast array data from dtype('O') to dtype('float32') according to the rule 'safe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "../ltr/data/sampler.py", line 108, in __getitem__
seq_info_dict = dataset.get_sequence_info(seq_id)
File "../ltr/dataset/depthtrack.py", line 125, in get_sequence_info
bbox = self._read_bb_anno(depth_path)
File "../ltr/dataset/depthtrack.py", line 95, in _read_bb_anno
gt = pandas.read_csv(bb_anno_file, delimiter=',', header=None, dtype=np.float32, na_filter=False, low_memory=False).values
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
return _read(filepath_or_buffer, kwds)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
return parser.read(nrows)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
index, columns, col_dict = self._engine.read(nrows)
File "/home/cat/anaconda3/envs/DeT/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 229, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 880, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1026, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1103, in pandas._libs.parsers.TextReader._convert_tokens
ValueError: cannot safely convert passed user dtype of float32 for object dtyped data in column 0

@Yifanpan1

I'm hitting the same problem. Have you solved it?

@TonikLeung
Author

Haven't yet..

@JiaLianjie

> I'm hitting the same problem. Have you solved it?
>
> Haven't yet..

I'm hitting this problem too. Have you solved it? And could you share your pandas version?

@JiaLianjie

> I'm hitting the same problem. Have you solved it?
>
> Haven't yet..

I have solved it! The problem is caused by an argument passed to pandas.read_csv() in DeT-main/ltr/dataset/depthtrack.py.
Here is what the pandas documentation says about the three relevant parameters:

na_values: scalar, str, list-like, or dict, optional
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘None’, ‘n/a’, ‘nan’, ‘null’.

keep_default_na: bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows:

  • If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing.
  • If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing.
  • If keep_default_na is False, and na_values are specified, only the NaN values specified in na_values are used for parsing.
  • If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.

Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.

na_filter: bool, default True
Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.

In depthtrack.py, line 95 passes na_filter=False to read_csv. As the note above says, this makes pandas ignore the other two parameters entirely, so read_csv cannot recognize the literal "nan" entries in "groundtruth.txt" as NA values, and the cast to float32 fails. Removing na_filter=False (i.e. using the default na_filter=True) fixes it.
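The effect of that one argument can be checked in isolation. A minimal sketch, assuming a ground-truth file with literal "nan" entries (the in-memory CSV below is hypothetical sample data standing in for groundtruth.txt):

```python
import io

import numpy as np
import pandas as pd

# Stand-in for a DepthTrack groundtruth.txt: one x,y,w,h box per line,
# with literal "nan" entries on some lines (hypothetical sample data).
csv_text = "100,50,30,40\nnan,nan,nan,nan\n120,60,30,40\n"

# With na_filter=False, "nan" is kept as a string, the column stays
# object-dtyped, and the requested cast to float32 fails.
try:
    pd.read_csv(io.StringIO(csv_text), header=None,
                dtype=np.float32, na_filter=False)
    raised = False
except (ValueError, TypeError):
    raised = True
print("na_filter=False raised:", raised)

# With the default na_filter=True, "nan" is parsed as NaN and the
# float32 cast succeeds.
gt = pd.read_csv(io.StringIO(csv_text), header=None,
                 dtype=np.float32).values
print(gt.shape, gt.dtype, np.isnan(gt[1]).all())
```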

@MOCUISHLE-AC
Copy link

Hello, I'm trying to run DeT_ATOM_Max, but I hit a problem when "det/ltr/dataset/depthtrack.py" executes 'frame_list = [self._get_frame(depth_path, f_id) for ii, f_id in enumerate(frame_ids)]' (line 185). It looks like there is a problem in the DepthTrack dataset itself (sequence toy07_indoor_320):

[train: 1, 1 / 1000] FPS: 7.4 (7.4) , Loss/total: 1.04415 , Loss/iou: 1.04415
[train: 1, 2 / 1000] FPS: 13.2 (59.7) , Loss/total: 0.95544 , Loss/iou: 0.95544
[train: 1, 3 / 1000] FPS: 14.5 (18.3) , Loss/total: 1.01399 , Loss/iou: 1.01399
[train: 1, 4 / 1000] FPS: 15.5 (19.0) , Loss/total: 1.00854 , Loss/iou: 1.00854
[train: 1, 5 / 1000] FPS: 16.1 (19.1) , Loss/total: 0.97844 , Loss/iou: 0.97844
[train: 1, 6 / 1000] FPS: 16.5 (19.2) , Loss/total: 0.88426 , Loss/iou: 0.88426
[train: 1, 7 / 1000] FPS: 16.9 (19.3) , Loss/total: 0.82712 , Loss/iou: 0.82712
[train: 1, 8 / 1000] FPS: 16.8 (16.3) , Loss/total: 0.78724 , Loss/iou: 0.78724
[train: 1, 9 / 1000] FPS: 16.9 (18.0) , Loss/total: 0.75835 , Loss/iou: 0.75835
[train: 1, 10 / 1000] FPS: 17.1 (19.2) , Loss/total: 0.73059 , Loss/iou: 0.73059
[train: 1, 11 / 1000] FPS: 17.2 (18.2) , Loss/total: 0.71331 , Loss/iou: 0.71331
[train: 1, 12 / 1000] FPS: 17.4 (19.8) , Loss/total: 0.69950 , Loss/iou: 0.69950
[train: 1, 13 / 1000] FPS: 17.5 (18.9) , Loss/total: 0.67931 , Loss/iou: 0.67931
[train: 1, 14 / 1000] FPS: 17.6 (18.6) , Loss/total: 0.65817 , Loss/iou: 0.65817
[train: 1, 15 / 1000] FPS: 17.6 (18.0) , Loss/total: 0.64252 , Loss/iou: 0.64252
[train: 1, 16 / 1000] FPS: 17.7 (19.6) , Loss/total: 0.62998 , Loss/iou: 0.62998
[train: 1, 17 / 1000] FPS: 17.8 (19.2) , Loss/total: 0.61434 , Loss/iou: 0.61434
[train: 1, 18 / 1000] FPS: 17.8 (18.4) , Loss/total: 0.59853 , Loss/iou: 0.59853
[train: 1, 19 / 1000] FPS: 17.9 (19.3) , Loss/total: 0.58180 , Loss/iou: 0.58180
[train: 1, 20 / 1000] FPS: 18.0 (19.0) , Loss/total: 0.56429 , Loss/iou: 0.56429
[ WARN:0@73.814] global loadsave.cpp:248 findDecoder imread_('/root/siton-gpfs-archive/leiheao/dataset/DepthTrack/train/toy07_indoor_320/color/00001385.jpg'): can't open/read file: check file path/integrity
[train: 1, 21 / 1000] FPS: 18.0 (18.8) , Loss/total: 0.55770 , Loss/iou: 0.55770
Training crashed at epoch 1
Traceback for the error!
Traceback (most recent call last):
File "/root/siton-gpfs-archive/leiheao/det/ltr/trainers/base_trainer.py", line 70, in train
self.train_epoch()
File "/root/siton-gpfs-archive/leiheao/det/ltr/trainers/ltr_trainer.py", line 82, in train_epoch
self.cycle_dataset(loader)
File "/root/siton-gpfs-archive/leiheao/det/ltr/trainers/ltr_trainer.py", line 53, in cycle_dataset
for i, data in enumerate(loader, 1):
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
data.reraise()
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/_utils.py", line 457, in reraise
raise exception
cv2.error: Caught error in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/envs/lha-env/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/siton-gpfs-archive/leiheao/det/ltr/data/sampler.py", line 161, in __getitem__
train_frames, train_anno, meta_obj_train = dataset.get_frames(seq_id, train_frame_ids, seq_info_dict)
File "/root/siton-gpfs-archive/leiheao/det/ltr/dataset/depthtrack.py", line 185, in get_frames
frame_list = [self._get_frame(depth_path, f_id) for ii, f_id in enumerate(frame_ids)]
File "/root/siton-gpfs-archive/leiheao/det/ltr/dataset/depthtrack.py", line 185, in <listcomp>
frame_list = [self._get_frame(depth_path, f_id) for ii, f_id in enumerate(frame_ids)]
File "/root/siton-gpfs-archive/leiheao/det/ltr/dataset/depthtrack.py", line 158, in _get_frame
img = get_rgbd_frame(color_path, depth_path, dtype=self.dtype, depth_clip=True)
File "/root/siton-gpfs-archive/leiheao/det/ltr/dataset/depth_utils.py", line 15, in get_rgbd_frame
rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.8.1) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

  • dataset/DepthTrack/train/toy07_indoor_320/color/00001385.jpg does not exist!
    Thank you if you could help me.

@JiaLianjie

> Hello, I'm trying to run DeT_ATOM_Max, but I hit a problem when "det/ltr/dataset/depthtrack.py" executes 'frame_list = [self._get_frame(depth_path, f_id) for ii, f_id in enumerate(frame_ids)]' (line 185). It looks like there is a problem in the DepthTrack dataset itself (toy07_indoor_320) …

You can check the dataset, and you'll indeed find that some files are missing. I deleted the incomplete sequences and training runs, but this certainly has an effect on the metrics.
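For anyone who wants to find the incomplete sequences before training, here is a minimal sketch of such a check. It assumes the layout visible in the error paths (<sequence>/color/*.jpg and <sequence>/depth/*.png with matching file stems); adjust the extensions if your copy of DepthTrack differs:

```python
from pathlib import Path

def find_incomplete_sequences(root):
    """Map sequence name -> frame stems present in only one of the
    color/ and depth/ folders (an empty dict means no mismatches)."""
    bad = {}
    for seq in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        color = {f.stem for f in (seq / "color").glob("*.jpg")}
        depth = {f.stem for f in (seq / "depth").glob("*.png")}
        missing = color.symmetric_difference(depth)
        if missing:
            bad[seq.name] = sorted(missing)
    return bad

if __name__ == "__main__":
    # Illustrative location; point this at your DepthTrack train split.
    demo_root = Path("dataset/DepthTrack/train")
    if demo_root.is_dir():
        for name, stems in find_incomplete_sequences(demo_root).items():
            print(name, stems[:5])
```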
