Compatible with Windows? #146

Closed
DavidGill159 opened this issue Apr 15, 2024 · 22 comments
Labels
documentation (Improvements or additions to documentation) · question (Further information is requested)

Comments

@DavidGill159
Contributor

Hi, do you have plans to provide a Windows-compatible installation option? The installation instructions specify Linux compatibility only and I have now run out of credits for further use of the cloud version. Thanks in advance.

@themattinthehatt
Collaborator

Hi @DavidGill159, thanks for your interest in lightning pose! Our package depends heavily on the NVIDIA DALI package, which does not run natively on Windows. However, you might try setting up the Windows Subsystem for Linux. You can see some responses from the DALI developers to this question in this DALI issue.

I am not aware of anybody using WSL with lightning pose at this time, but please let us know if you try it out!
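
For reference, WSL itself is installed from the Windows side. A minimal sketch following Microsoft's documented command (the distribution name is an example, and exact flags may vary between Windows versions):

# run in an administrator PowerShell or Command Prompt on Windows, then reboot if prompted
wsl --install -d Ubuntu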

@Wei21st

Wei21st commented Apr 16, 2024

I used WSL to set up lightning pose, and it works great.

@themattinthehatt
Collaborator

@Wei21st that's great to hear! Would you mind posting some more info about how you got it set up? I'd like to add that information to the documentation.

@Wei21st

Wei21st commented Apr 16, 2024

Actually it went very smoothly. I mostly just followed the instructions in your documentation; specifically, I used Method 2: conda from source.
For other people who want to use WSL, they first need to set up WSL according to the official website.
The only thing different from a well-established Linux computer is that I needed to install a few extra packages before installing lightning-pose.
These include:

sudo apt install python-is-python3 
sudo apt install libgl1-mesa-glx

After installation, training and inference ran without any issues when following the documentation.

Not sure if this will help; the process was so smooth that I can't recall many details, which might be good news for other users. I'd guess that most problems during installation are minor, and consulting someone who knows Linux (which is how I did it) will solve most of them.
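
For anyone replicating this, a condensed sketch of the WSL-side setup described above, assuming Method 2 (conda from source) from the lightning-pose installation docs; package names, the Python version, and any pip extras should be checked against the current docs:

# inside the Ubuntu (WSL) terminal
sudo apt update
sudo apt install -y python-is-python3 libgl1-mesa-glx ffmpeg

# with conda (e.g. Miniconda) installed, create and activate an environment
conda create --name lp python=3.10
conda activate lp

# clone and install from source
git clone https://github.com/danbider/lightning-pose.git
cd lightning-pose
pip install -e .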

@DavidGill159
Contributor Author

DavidGill159 commented Apr 17, 2024

@Wei21st @themattinthehatt Hi, I have set up WSL and run those two sudo install lines in a Windows Ubuntu terminal. The LP installation succeeded up until step 4, where I receive this error:
[screenshot of the error message attached]

@Wei21st

Wei21st commented Apr 17, 2024

When you installed all the dependencies in step 3, did it throw any errors? You can verify this by re-running step 3.
By the way, I git-cloned the project within the Linux filesystem, i.e. in \\wsl.localhost\Ubuntu\home\YOUR_USER_NAME\lightning-pose.
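
A quick way to confirm you are working in the Linux filesystem rather than on a mounted Windows drive (paths under /mnt/c are the Windows drive, and cloning there can cause permission and performance issues); the username is illustrative:

cd ~        # this is \\wsl.localhost\Ubuntu\home\YOUR_USER_NAME when viewed from Windows
pwd         # should print /home/YOUR_USER_NAME, not /mnt/c/...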

@DavidGill159
Contributor Author

DavidGill159 commented Apr 18, 2024

When you installed all the dependencies in step 3, did it throw any errors? You can verify this by re-running step 3. By the way, I git-cloned the project within the Linux filesystem, i.e. in \\wsl.localhost\Ubuntu\home\YOUR_USER_NAME\lightning-pose.

Hey, no errors were thrown when initially running step 3 or when re-running it as you recommended. Yes, I git-cloned within the Linux filesystem.

UPDATE: I created a virtual environment and redid step 3 onwards from there, and everything seems to have run successfully now. See the attached test session results:

@Wei21st thanks for the help!

@themattinthehatt ->

  1. Is there no GUI for the Linux-installed version?
  2. Provided these test session results are good, I am happy to put together a step-by-step guide for Windows users, from WSL installation to this point (while it is all still fresh in my mind, haha).

[screenshot of test session results attached]
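
A rough sketch of the reset described in the update above: create a fresh environment and re-run the dependency install from there. Whether you use conda or venv the idea is the same; the environment name and Python version below are illustrative, so check the lightning-pose docs for the recommended version:

# start over in a clean conda environment
conda deactivate
conda create --name lp-clean python=3.10
conda activate lp-clean

# re-run the install ("step 3" in the docs) from the cloned repo
cd ~/lightning-pose
pip install -e .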

@themattinthehatt
Collaborator

themattinthehatt commented Apr 18, 2024

@DavidGill159 so glad you were able to get WSL working! Thanks @Wei21st for your help 🙏

We do in fact have a GUI; it is separate from the lightning-pose repo. You can find it here: https://github.com/Lightning-Universe/Pose-app
We are still working on providing a full feature set for the singleview case in the app, so we do not yet have a GUI that works with multiview data (specifically the labeling part). How have you been labeling your data? When you label a frame from one camera view are you labeling the corresponding frames from the other views as well?

Re: WSL installation steps, this would be very much appreciated!! Would you want to update the actual docs and do a Pull Request? That way you get the credit for your work. If so we can discuss what that looks like. Alternatively you can just send the steps to this issue and I can update the docs (with a pointer to the issue so you can get credit that way).

@DavidGill159
Contributor Author

DavidGill159 commented Apr 18, 2024

we do not yet have a GUI that works with multiview data (specifically the labeling part).

Will it work for training and inference?

How have you been labeling your data? When you label a frame from one camera view are you labeling the corresponding frames from the other views as well?

I have been using my labels from a DeepLabCut project. The frames that are labelled are not consistent across cameras.

Re: WSL installation steps, this would be very much appreciated!! Would you want to update the actual docs and do a Pull Request? That way you get the credit for your work. If so we can discuss what that looks like. Alternatively you can just send the steps to this issue and I can update the docs (with a pointer to the issue so you can get credit that way).

Sure! I am happy to do a Pull Request.

@themattinthehatt
Collaborator

themattinthehatt commented Apr 18, 2024

I have been using my labels from a DeepLabCut project. The frames that are labelled are not consistent across cameras.

Ahhh, so actually this changes things. The multiview updates that we've been working on are specifically designed to take advantage of labels across different views at the same time point. In your case, if you don't have this consistency, then you can just treat your project like a single-camera dataset, just as you've been doing with DLC (and the multiview component doesn't come in until the last triangulation step). You can then use the Pose-app GUI as-is, and you'd just need to upload/label single videos at a time (making sure to include videos from different views).

Sure! I am happy to do a Pull Request.

Thank you! Let me get back to you about this soon.

@DavidGill159
Contributor Author

Ahhh, so actually this changes things. The multiview updates that we've been working on are specifically designed to take advantage of labels across different views at the same time point. In your case, if you don't have this consistency, then you can just treat your project like a single-camera dataset, just as you've been doing with DLC (and the multiview component doesn't come in until the last triangulation step). You can then use the Pose-app GUI as-is, and you'd just need to upload/label single videos at a time (making sure to include videos from different views).

Ah, I see. In that case, I'm assuming the TCN model won't be used?

@themattinthehatt
Collaborator

You can still use the TCN! It only requires adjacent frames on a view-by-view basis. In order to use it, though, you'll need to add the context frames to your LP project. So if you have a frame called labeled-data/vid_x_camera_1/frame0099.png (as an example), you will need to add frame0097.png, frame0098.png, frame0100.png, and frame0101.png from vid_x_camera_1 into the same folder. [and then set model.model_type: heatmap_mhcrnn in the config file]

Here's a function that we use in the app to perform this extraction: https://github.com/Lightning-Universe/Pose-app/blob/main/lightning_pose_app/backend/extract_frames.py#L272
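
As an illustration only (the app performs this with the linked function), a rough ffmpeg-based sketch of pulling the context frames for one labeled frame. It assumes the index in the filename matches ffmpeg's 0-based frame counter, which may not hold for every extraction pipeline, and the video filename is illustrative:

# labeled frame: labeled-data/vid_x_camera_1/frame0099.png
# grab frames 97-101 (the labeled frame plus two context frames on each side)
# the output folder must already exist
ffmpeg -i vid_x_camera_1.mp4 \
  -vf "select='between(n,97,101)'" -vsync 0 \
  -start_number 97 labeled-data/vid_x_camera_1/frame%04d.png
# then set model.model_type: heatmap_mhcrnn in the config file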

@themattinthehatt
Collaborator

So for the PR, you'll first have to make a fork of the repo, and then you can update the following file:
https://raw.githubusercontent.com/danbider/lightning-pose/main/docs/source/installation.rst

  1. update the first sentence to say "Linux or Windows (using WSL, see below)"
  2. after the installation bullet points and the line about docker users, add another line that says "If you are a Windows user, please [read this first](internal link to windows instructions)."
  3. add a section after "Docker users" called "Windows installation with WSL" (or something similar). Here you can put all of the steps necessary to install WSL. Once WSL is installed I think it would be best to just refer back to the beginning of the installation docs and say "now you can follow the instructions above using either [Method 1: pip package](internal link) or [Method 2: conda from source](internal link)". I guess you'll have to remind people to also install ffmpeg and conda beforehand too.

How does that sound?

@themattinthehatt added the question and documentation labels on Apr 19, 2024
@DavidGill159
Contributor Author

DavidGill159 commented Apr 19, 2024

I'm trying to label videos in the installed app, but when loading my videos I get the error 'file must be 200mb or smaller'. My video files are AVIs, so they are generally quite big. Do you plan on supporting larger files in the future?

I also noticed that when creating a project, it didn't detect tensorflow:
TensorFlow installation not found - running with reduced feature set.

@themattinthehatt
Collaborator

Yes, you can upload larger files. You just need to set a flag on the command line before you launch the app; see the FAQ "How do I increase the file upload size limit?" here.

Re: the TensorFlow installation, this is not a problem. Lightning Pose is built on PyTorch, not TensorFlow. We use TensorBoard for visualization of training, which is actually agnostic to the underlying deep learning library. When TensorBoard is launched it looks for a TensorFlow installation, which enables additional features, but these are not features that we utilize.
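
For completeness, a minimal sketch of launching TensorBoard to monitor training; the log directory depends on where your model outputs are written, so the path below is illustrative:

# from the environment where lightning-pose is installed
# (pip install tensorboard first if it is not already present)
tensorboard --logdir outputs/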

@DavidGill159
Contributor Author

DavidGill159 commented Apr 22, 2024

Great, thanks! I launched the app with the 500 MB upload limit and uploaded my videos (24 x ~200 MB .avi files) successfully. But when extracting frames, the first 6 videos are processed without issue and then the app crashes with a server error.
The extraction appears to continue in my terminal, but upon completion I have no way of re-activating the app without losing the extracted data. Note, I am running app version 1.9.1. When uploading 12 videos instead of 24 I don't have this problem, so I assume it is an issue of volume? However, when proceeding to label frames after extracting from 12 videos, Heidi doesn't detect any projects.

@themattinthehatt
Collaborator

@DavidGill159 would you mind posting this as a new issue in the Pose-app issues? I'll answer over there.

@DavidGill159
Contributor Author

So for the PR, you'll first have to make a fork of the repo, and then you can update the following file: https://raw.githubusercontent.com/danbider/lightning-pose/main/docs/source/installation.rst

  1. update the first sentence to say "Linux or Windows (using WSL, see below)"
  2. after the installation bullet points and the line about docker users, add another line that says "If you are a Windows user, please [read this first](internal link to windows instructions)."
  3. add a section after "Docker users" called "Windows installation with WSL" (or something similar). Here you can put all of the steps necessary to install WSL. Once WSL is installed I think it would be best to just refer back to the beginning of the installation docs and say "now you can follow the instructions above using either [Method 1: pip package](internal link) or [Method 2: conda from source](internal link)". I guess you'll have to remind people to also install ffmpeg and conda beforehand too.

How does that sound?

Hi Matt,
I created a fork and merged it as requested, can you see it?

@themattinthehatt
Collaborator

@DavidGill159 thanks for this!!

It looks like you opened the PR in your fork rather than the original lightning pose repo. Here are some instructions for how to create a PR across forks: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork

When successful you should see the open PR here: https://github.com/danbider/lightning-pose/pulls
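
For reference, a generic sketch of the cross-fork workflow described above (the branch name and placeholder username are illustrative):

# after forking danbider/lightning-pose on GitHub under your own account
git clone https://github.com/YOUR_USERNAME/lightning-pose.git
cd lightning-pose
git checkout -b wsl-install-docs
# edit docs/source/installation.rst, then:
git add docs/source/installation.rst
git commit -m "Add WSL installation instructions for Windows users"
git push origin wsl-install-docs
# finally, open a pull request on GitHub with base repository danbider/lightning-pose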

@DavidGill159
Contributor Author

My bad. It is there now! :)

@themattinthehatt
Collaborator

I see it! Will take a look later this morning and get back to you 🙏

@themattinthehatt
Collaborator

Updates merged, and docs are published: https://lightning-pose.readthedocs.io/en/latest/source/installation.html#
:)
