
Cannot save videos #34

Open
Huhaowen0130 opened this issue Dec 27, 2021 · 13 comments

Comments

@Huhaowen0130

Huhaowen0130 commented Dec 27, 2021

Hello! When I tested MARS on my videos as below,
[screenshot: the command used to run MARS]
there seems to be something wrong with the saved video; the size is 1 KB:
[screenshot: the 1 KB output file]
May I ask how to save videos like this? Thank you!
[screenshot: example of the desired annotated video]

@Huhaowen0130 changed the title from "Error: CondaEnvException: Pip failed" to "Cannot save videos" on Dec 30, 2021
@sw-dev-code

I have a similar issue. Video is created with a size of 252 bytes and it isn't playable.

@Huhaowen0130
Copy link
Author

> I have a similar issue. Video is created with a size of 252 bytes and it isn't playable.

Do you have any idea how to address it?

@annkennedy
Contributor

annkennedy commented Jan 2, 2022

Sorry to hear you've been having issues with this - is this occurring on Linux or Windows?

We'll work to get this fixed, but in the meantime if you have Matlab you can also save video snippets and view your pose + annotation output using Bento: http://github.com/neuroethology/bentoMAT [edited to correct link]

@sw-dev-code

@annkennedy Can you please double-check the link for Bento? It seems to be broken.

The issue happens on Windows in my case. You can find the error log below.

animating056    0% --  [Elapsed Time: 0:00:00] |            | (ETA:  --:--:--)
'list' object has no attribute 'emit'
'list' object has no attribute 'emit'
Finished processing all the data in the queue!

@annkennedy
Contributor

Apologies, that's http://github.com/neuroethology/bentoMAT

@Huhaowen0130
Author

> Sorry to hear you've been having issues with this - is this occurring on Linux or Windows?
>
> We'll work to get this fixed, but in the meantime if you have Matlab you can also save video snippets and view your pose + annotation output using Bento: http://github.com/neuroethology/bentoMAT [edited to correct link]

I'm trying Bento now, but I haven't found a way to save videos.

By the way, is MARS designed only for pairs of mice? Can it be used to analyse the behavior of a single mouse?

@annkennedy
Contributor

You can save movies with Bento by selecting File->Save Movie. After setting a filename, an interface will pop up with save options, allowing you to set the start+stop times of the saved clip. Make sure the encoding format you select in the interface matches the extension you selected when saving the file, and be sure not to resize the window while the movie is being generated.

MARS is designed for pairs of interacting mice, though if you have a single mouse you can always just discard the pose data for the animal you're not interested in (which will be random nonsense), and of course the behavior classifier output won't make sense. You can also train new mouse pose models + behavior classifiers for single-mouse conditions, using http://github.com/neuroethology/MARS_developer. We're working on a version of MARS that will let you specify number+type of animals to track+detect behaviors for; it's not yet ready for release, but hopefully will be out soon.
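
For example, here's a minimal sketch of discarding one animal's pose data (assuming the pose JSON stores keypoints per frame as a list of animals, each as [x coordinates, y coordinates], and that the animal you care about is at index 0 - verify both by plotting a few frames; the filenames are hypothetical):

import json

# hypothetical paths - point these at your own MARS output
pose_path = 'output_v1_8/sample_clip_1/pose_top.json'
out_path = 'single_mouse_pose.json'

with open(pose_path) as f:
    pose = json.load(f)

# keep only the first animal in every frame (index 0 is an assumption)
pose['keypoints'] = [frame[:1] for frame in pose['keypoints']]
if 'scores' in pose:  # assumed to share the same per-animal layout
    pose['scores'] = [frame[:1] for frame in pose['scores']]

with open(out_path, 'w') as f:
    json.dump(pose, f)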

@Huhaowen0130
Author

> You can save movies with Bento by selecting File->Save Movie. After setting a filename, an interface will pop up with save options, allowing you to set the start+stop times of the saved clip. Make sure the encoding format you select in the interface matches the extension you selected when saving the file, and be sure not to resize the window while the movie is being generated.
>
> MARS is designed for pairs of interacting mice, though if you have a single mouse you can always just discard the pose data for the animal you're not interested in (which will be random nonsense), and of course the behavior classifier output won't make sense. You can also train new mouse pose models + behavior classifiers for single-mouse conditions, using http://github.com/neuroethology/MARS_developer. We're working on a version of MARS that will let you specify number+type of animals to track+detect behaviors for; it's not yet ready for release, but hopefully will be out soon.

I see. Thank you for your reply!

@sw-dev-code

@annkennedy Thank you so much for your help. Is there any way I can be notified when that new version of MARS is released?

@zhaojiachen1994

> @annkennedy Can you please double-check the link for Bento? It seems to be broken.
>
> The issue happens on Windows in my case. You can find the error log below.
>
> animating056    0% --  [Elapsed Time: 0:00:00] |            | (ETA:  --:--:--)
> 'list' object has no attribute 'emit'
> 'list' object has no attribute 'emit'
> Finished processing all the data in the queue!

I got the same problem when running MARS on the sample video. Do you have any idea how to address it?

@Archerfaded

@annkennedy I use MARS on Linux. It outputs the video, but there are none of the annotations for mounting and other behaviors shown in the previous figure.

@Archerfaded

@annkennedy The output video looks like this, with no annotations:
[screenshot: output video frame without annotations]

@ichbill

ichbill commented Sep 5, 2022

I wrote a simple script to visualize the joints. Hope this can help you.

You first need to run the MARS code to get predictions, then run this script to draw the joints and output a video.
Run pip install tqdm to install tqdm if you haven't already.

import json
import os

import cv2
import numpy as np
from tqdm import tqdm

# change video_path and pred_path here
video_path = 'sample_videos/sample_clip_1.mp4'
pred_path = 'sample_videos/output_v1_8/sample_clip_1'

output_path = os.path.join(pred_path, 'output.mp4')

# one BGR color per animal: red for the first, blue for the second
color = [(0, 0, 255), (255, 0, 0)]

def drawline(image, data, pt1, pt2):
    # data is [x coordinates, y coordinates]; draw a yellow line between keypoints pt1 and pt2
    image = cv2.line(image, (int(data[0][pt1]), int(data[1][pt1])),
                     (int(data[0][pt2]), int(data[1][pt2])),
                     thickness=1, color=(0, 255, 255))
    return image

# read all frames of the input video into memory
video = cv2.VideoCapture(video_path)
video_data = []

while video.isOpened():
    ret, frame = video.read()
    if not ret:
        break
    video_data.append(frame)

print(np.array(video_data).shape)

# 'mp4v' matches the .mp4 extension; the 'MJPG' fourcc generally needs a .avi container
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = video.get(cv2.CAP_PROP_FPS)
size = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter(output_path, fourcc, fps, size)
video.release()

for files in sorted(os.listdir(pred_path)):
    print(files)
    if 'pose_top' in files and '.json' in files:
        with open(os.path.join(pred_path, files)) as f:
            pose_top = json.load(f)
        print(pose_top.keys())

        for keys in sorted(pose_top.keys()):
            print(keys, len(pose_top[keys]))
            if keys == 'keypoints':
                # keypoints are stored per frame as a list of animals,
                # each animal as [x coordinates, y coordinates]
                for i in tqdm(range(min(len(pose_top[keys]), len(video_data)))):
                    image = video_data[i]
                    for j, instances in enumerate(pose_top[keys][i]):
                        for keypoints in range(len(instances[0])):
                            image = cv2.circle(image, (int(instances[0][keypoints]), int(instances[1][keypoints])),
                                               radius=5, color=color[j], thickness=-1)

                        # connect the keypoints into a skeleton
                        image = drawline(image, instances, 0, 1)
                        image = drawline(image, instances, 0, 2)
                        image = drawline(image, instances, 1, 3)
                        image = drawline(image, instances, 2, 3)
                        image = drawline(image, instances, 3, 4)
                        image = drawline(image, instances, 3, 5)
                        image = drawline(image, instances, 4, 6)
                        image = drawline(image, instances, 5, 6)
                    out.write(image)
    print()

out.release()
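
If the output file still comes out only a few hundred bytes, check cv2.VideoWriter directly: it fails silently when the fourcc doesn't match the container or isn't supported on your system, so verify that out.isOpened() returns True right after creating the writer. If it doesn't, try the 'MJPG' fourcc with a .avi output filename instead.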
