
windows compat #6

Open
pwang724 opened this issue Feb 25, 2021 · 14 comments

Comments

@pwang724

Hi! I've been trying to run this on Windows and realized there is a bunch of code that operates on Unix file separators, e.g.

line 597 in dataset.py
def extract_frame_num(img_path):
    return int(img_path.rsplit('/', 1)[-1][3:].split('.')[0])

Could we get a simple fix (i.e. os.path.split/os.path.join) to make the file separators platform-independent?
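For reference, a minimal platform-independent sketch of that helper (assuming frame filenames look like img0042.png, as the [3:] slice implies; this is not the fix that landed in the repo):

import os

def extract_frame_num(img_path):
    # os.path.basename splits on the platform's separator(s), so this
    # handles both forward- and back-slash paths on Windows
    fname = os.path.basename(img_path)      # e.g. 'img0042.png'
    stem = os.path.splitext(fname)[0]       # 'img0042'
    return int(stem[3:])                    # 42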

@waq1129
Collaborator

waq1129 commented Feb 25, 2021

Hi

Thanks for your interest in our model. Sure, I will try to fix this error with split and join as soon as possible. Thanks for the feedback!

Generally, we are not currently maintaining the code for Windows, due to the diversity of Python package setups on Windows. Sorry for the inconvenience. However, we do have DGP installed on NeuroCAAS, which is a super cool platform that supports neural data analysis with a simple drag-and-drop. That is much more convenient than installing the packages on your local machine.

@waq1129
Collaborator

waq1129 commented Feb 25, 2021

Hey, I just fixed all the path issues. Please let me know if you still run into any errors.

@pwang724
Author

Will do, thanks so much. I'll also reach out and test the NeuroCAAS framework. Much appreciated.

@wweertman

Can confirm that DGP works on Windows now.

Is there an assumption that all training data has the animal present in it? A missing animal and no labels seem to cause training to fail.

@waq1129
Collaborator

waq1129 commented Mar 4, 2021

Hi, can you be more concrete about what you mean by "A missing animal and no labels"? If the animal has never shown up in the training videos, or a specific body part has never shown up, then it's hard to get very accurate results on the test videos. Also, how does DLC perform? DGP should improve upon DLC.

@wweertman

Hi, can you be more concrete about what you mean by "A missing animal and no labels"? If the animal has never shown up in the training videos, or a specific body part has never shown up, then it's hard to get very accurate results on the test videos. Also, how does DLC perform? DGP should improve upon DLC.

Hmm, that was unclear.

I have a DeepLabCut model trained on roughly 1600 labelled images, drawn from a dataset of ~200 hours of footage. Because of the dataset size, it contains noise, i.e., images missing animals, or images with human hands and no animals. Filtering the dataset would take a prohibitively long time for one person to do.

Because of how I selected the images for training, there were a number of images that did not have the animal of interest in them, so I did not label those images, but they were kept in the dataset. Unfortunately, DeepLabCut does not make it easy to remove data from a dataset during labelling, which is when you find the noise, so it is easier to be careful when adding data to prevent noise and to leave noise in if you get it. Additionally, I found that it is helpful to have some noise examples in the dataset to reduce labelling error when noise is present; this does not seem to reduce the performance of DeepLabCut for labelling animals.

When deepgraphpose is initializing the ResNet, it seems to do so iteratively for each data folder, which is absurdly slow... in my case around 300 folders.

...
Initializing ResNet
1781_1912_fc
[90 97]
Initializing ResNet
178_317_fc
[65 69 90]
Initializing ResNet
18149_18214_fc
[13 51]
...

It gets hung up and fails whenever a folder without labelled data is found:

Initializing ResNet
23250_23271_fc
[]
Traceback (most recent call last):
  File "demo/run_dgp_demo.py", line 209, in <module>
    step=1)
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\models\fitdgp.py", line 340, in fit_dgp_labeledonly
    S0=S0)
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\dataset.py", line 860, in __init__
    self.datasets.append(Dataset(video_file, self.dlc_config, self.paths))
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\dataset.py", line 336, in __init__
    assert len(idxs_train) > 0
AssertionError

I get that the beauty of deepgraphpose requires that labels be present and that there always be an animal in the video, and that having noise will blow up the model and probably make it useless. Currently, I am finding and removing all the folders that contain noise from my DeepLabCut dataset.

It seems like it would be reasonable to simply skip a folder during deepgraphpose training if it raises this AssertionError, i.e., using a try/except or a boolean flag to handle the error when it occurs.
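A rough sketch of what I mean, assuming the loop in dataset.py around line 860 looks roughly like the traceback above suggests; the names here are illustrative, not the actual DGP code:

for video_file in video_files:
    try:
        # Dataset.__init__ asserts len(idxs_train) > 0, so a folder with
        # no labelled frames raises AssertionError here
        self.datasets.append(Dataset(video_file, self.dlc_config, self.paths))
    except AssertionError:
        print('Skipping folder with no labelled frames:', video_file)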

@waq1129
Collaborator

waq1129 commented Mar 5, 2021

Hi @wweertman, thanks for this feedback. We will look into these two issues: 1. slow initialization; 2. assertion error. I will let you know as soon as we finish. Sorry for these errors.

@wweertman

Hi @wweertman, thanks for this feedback. We will look into these two issues: 1. slow initialization; 2. assertion error. I will let you know as soon as we finish. Sorry for these errors.

Beautiful!

I have also been getting the assertion error from videos that only have one or two missing bodyparts, i.e., occluded bodyparts.

@wweertman

Hi @wweertman, thanks for this feedback. We will look into these two issues: 1. slow initialization; 2. assertion error. I will let you know as soon as we finish. Sorry for these errors.

Beautiful!

I have also been getting the assertion error from videos that only have one or two missing bodyparts, i.e., occluded bodyparts.

[screenshot]

I have also gotten assertion errors from frames with all bodyparts labeled.

@wweertman

wweertman commented Mar 6, 2021

[screenshot]

So here is another weird one that I don't totally get.

Initializing ResNet
1462_1562_fc
[31 53]



Creating training datasets
--------------------------
WARNING:py.warnings:C:\Users\wlwee\Anaconda3\envs\dlc-windowsGPU\lib\site-packages\moviepy\video\io\ffmpeg_reader.py:130: UserWarning: Warning: in file C:\Users\wlwee\Documents\python\follow_cam_models\MODEL\arms-weert-2021-01-17\videos\10069_10089_fc.mp4, 1080000 bytes wanted but 0 bytes read,at frame 20/21, at time 2.86/2.86 sec. Using the last valid frame instead.
  UserWarning)

Selected additional 1 hidden frames
Skipped 0 high motion energy (me) frames since in visible window or close to higher me hidden frame
Starting with standard pose-dataset loader.
Traceback (most recent call last):
  File "demo/run_dgp_demo.py", line 209, in <module>
    step=1)
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\models\fitdgp.py", line 368, in fit_dgp_labeledonly
    data_batcher.create_batches_from_resnet_output(snapshot, **batch_dict)
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\dataset.py", line 929, in create_batches_from_resnet_output
    dataset.create_batches_from_resnet_output(self.batch_info, self.paths['batched_data'])
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\dataset.py", line 413, in create_batches_from_resnet_output
    target_2d, target_idxs, _, _ = self._compute_targets()
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deepgraphpose\dataset.py", line 610, in _compute_targets
    data = dataset.next_batch()
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deeplabcut\deeplabcut\pose_estimation_tensorflow\dataset\pose_defaultdataset.py", line 145, in next_batch
    return self.make_batch(data_item, scale, mirror)
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deeplabcut\deeplabcut\pose_estimation_tensorflow\dataset\pose_defaultdataset.py", line 165, in make_batch
    image = imread(os.path.join(self.cfg.project_path,im_file), mode='RGB')
  File "c:\users\wlwee\documents\python\dgp_models\deepgraphpose\src\deeplabcut\deeplabcut\utils\auxfun_videos.py", line 18, in imread
    return cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(3.4.13) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-cg3xbgmk\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

The video is short, 2 seconds, and is not a symbolic link (which is the DLC default to create). I think the error is because one of the labelled images is the final frame in the video. This label was extracted using the DeepLabCut k-means tool, so it just so happened to be the final frame.

It might be helpful to have a fairly detailed section in the README listing all the assumptions deepgraphpose makes about the structure of the data coming from DeepLabCut, so that when we build our datasets using the DeepLabCut tools we have a guide for keeping them compatible with deepgraphpose.
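As one example of the kind of assumption check that would help, here is a hypothetical pre-flight sketch that flags labelled frames sitting at or past the last readable frame of their video; none of these names come from DGP or DLC:

import cv2

def check_labeled_frames(video_path, labeled_frame_idxs):
    # count the frames the decoder reports for this clip
    cap = cv2.VideoCapture(video_path)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    for idx in labeled_frame_idxs:
        # the final frame of short clips is often not decodable
        # (see the moviepy warning above), so treat it as suspect too
        if idx >= n_frames - 1:
            print(video_path, ': labelled frame', idx,
                  'is at/after the last readable frame of', n_frames)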

@wweertman

On the note above: the reason for the short videos is that we only want to do tracking on the animals when they are in a specific region of our flume. We have tried training a DeepLabCut model with a selection of all the data (animals on the walls, on rock, overexposed, etc.) and it performed worse on the data from the region we are interested in.

To overcome this we grab 'trajectories' whenever an animal crosses the area of the flume we are interested in. This results in many short videos. If, say, we knew that deepgraphpose required videos of at least X length, with labelled frames no closer than N frames to the final frame of the video, we could take that into account when creating our DeepLabCut models.

@waq1129
Collaborator

waq1129 commented Mar 8, 2021

Hi @wweertman

I fixed the assertion error for videos with no labels.

For the short video issue, the error occurs inside the deeplabcut package. It seems that the path os.path.join(self.cfg.project_path, im_file) doesn't exist. Is that true?

Can you print out os.path.join(self.cfg.project_path, im_file) before line 165 in https://github.com/paninski-lab/deepgraphpose/blob/main/src/DeepLabCut/deeplabcut/pose_estimation_tensorflow/dataset/pose_defaultdataset.py, to see whether the error is due to a missing file?
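For example, something along these lines (just a debug sketch, not code that is in the repo):

full_path = os.path.join(self.cfg.project_path, im_file)
# print the resolved path and whether it actually exists on disk
print(full_path, os.path.isfile(full_path))
image = imread(full_path, mode='RGB')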

Best,
Anqi

@wweertman

Hi @wweertman

I fixed the assertion error for videos with no labels.

For the short video issue, the error occurs inside the deeplabcut package. It seems that the path os.path.join(self.cfg.project_path, im_file) doesn't exist. Is that true?

Can you print out os.path.join(self.cfg.project_path, im_file) before line 165 in https://github.com/paninski-lab/deepgraphpose/blob/main/src/DeepLabCut/deeplabcut/pose_estimation_tensorflow/dataset/pose_defaultdataset.py, to see whether the error is due to a missing file?

Best,
Anqi

def make_batch(self, data_item, scale, mirror):
        im_file = data_item.im_path
        logging.debug('image %s', im_file)
        logging.debug('mirror %r', mirror)

        #print(im_file, os.getcwd())
        #print(self.cfg.project_path)
        image = imread(os.path.join(self.cfg.project_path,im_file), mode='RGB')

        if self.has_gt:
            joints = np.copy(data_item.joints)

        if self.cfg.crop: #adapted cropping for DLC
            if np.random.rand()<self.cfg.cropratio:
                j=np.random.randint(np.shape(joints)[1]) #pick a random joint
                joints,image=CropImage(joints,image,joints[0,j,1],joints[0,j,2],self.cfg)
                '''
                print(joints)
                import matplotlib.pyplot as plt
                plt.clf()
                plt.imshow(image)
                plt.plot(joints[0,:,1],joints[0,:,2],'.')
                plt.savefig("abc"+str(np.random.randint(int(1e6)))+".png")
                '''
            else:
                pass #no cropping!

        img = imresize(image, scale) if scale != 1 else image
        scaled_img_size = arr(img.shape[0:2])
        if mirror:
            img = np.fliplr(img)

        batch = {Batch.inputs: img}

        if self.has_gt:
            stride = self.cfg.stride

            if mirror:
                joints = [self.mirror_joints(person_joints, self.symmetric_joints, image.shape[1]) for person_joints in
                          joints]

            sm_size = np.ceil(scaled_img_size / (stride * 2)).astype(int) * 2

            scaled_joints = [person_joints[:, 1:3] * scale for person_joints in joints]

            joint_id = [person_joints[:, 0].astype(int) for person_joints in joints]
            part_score_targets, part_score_weights, locref_targets, locref_mask = self.compute_target_part_scoremap(
                joint_id, scaled_joints, data_item, sm_size, scale)

            batch.update({
                Batch.part_score_targets: part_score_targets,
                Batch.part_score_weights: part_score_weights,
                Batch.locref_targets: locref_targets,
                Batch.locref_mask: locref_mask
            })

        batch = {key: data_to_input(data) for (key, data) in batch.items()}

        batch[Batch.data_item] = data_item

        return batch

lines 158 - 218 of pose_defaultdataset.py
Sorry for the slow response; I did not have this thread set to watch.

Thanks!
Willem

@waq1129
Collaborator

waq1129 commented Mar 23, 2021

Oh, I meant: does the file at the path os.path.join(self.cfg.project_path, im_file) actually exist? If not, then
image = imread(os.path.join(self.cfg.project_path, im_file), mode='RGB') might raise this error:
cv2.error: OpenCV(3.4.13) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-cg3xbgmk\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
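cv2.imread returns None for a missing or unreadable file, and cvtColor then fails with exactly that !_src.empty() assertion. A small guard at that spot (just a sketch, not a committed change) would make the failure explicit:

full_path = os.path.join(self.cfg.project_path, im_file)
if not os.path.isfile(full_path):
    # fail with the offending path instead of an opaque cv2 assertion
    raise FileNotFoundError('labelled image not found: ' + full_path)
image = imread(full_path, mode='RGB')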
