Track-Anything OpenCV Backend

Modifications and Improvements

  • 2023/05/31: Added the SAM annotation functionality in sam_annotate.py to the project.

  • 2023/05/29: Added tracker to the OpenCV backend.

This project is a fork of Track-Anything made by gaomingqi. It incorporates the following modifications and improvements:

  1. Changed Visualization Framework: The project switched from Gradio to OpenCV for visual display, which gives a more direct interface and noticeably better speed and responsiveness.

  2. Enhanced Annotation: Users can annotate video frames quickly within the OpenCV interface, producing masks that serve as tracking targets. If tracking drifts, the annotation can be corrected on the spot; this workflow was noticeably slower in the Gradio-based version.

  3. Usage with OpenCV Backend: The OpenCV backend is operated directly with the keyboard and mouse. Key operations include:

    • Press 'p' to pause the video for annotation.
    • While paused, annotate the frame at the current 'Position': left-click to add positive prompts, right-click to add negative prompts (a minimal sketch of this click-to-prompt flow follows this list).
    • Once annotation is complete, press 'g' to start mask tracking (tracking continues until the 'End' frame).
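
The click-to-prompt flow above maps naturally onto SAM's point-prompt API. Below is a minimal sketch (not this project's actual code) of how left/right clicks in an OpenCV window can be fed to a SamPredictor; the checkpoint path, window name, frame source, and overlay colour are assumptions, and the real logic lives in opencv_backend.py and sam_annotate.py.

# A minimal sketch (not this project's actual code) of the click-to-prompt idea:
# left clicks become positive SAM point prompts, right clicks negative ones.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

CHECKPOINT = "checkpoints/sam_vit_b_01ec64.pth"   # assumed ViT-B checkpoint name

frame = cv2.imread("frame.jpg")                   # stand-in for a paused video frame
predictor = SamPredictor(sam_model_registry["vit_b"](checkpoint=CHECKPOINT))
predictor.set_image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # SAM expects RGB

points, labels = [], []        # accumulated click coordinates and 1/0 labels
mask = None                    # latest predicted mask (H x W bool)
dirty = False                  # re-run SAM only when the prompts change

def on_mouse(event, x, y, flags, param):
    global dirty
    if event == cv2.EVENT_LBUTTONDOWN:            # positive prompt
        points.append([x, y]); labels.append(1); dirty = True
    elif event == cv2.EVENT_RBUTTONDOWN:          # negative prompt
        points.append([x, y]); labels.append(0); dirty = True

cv2.namedWindow("annotate")
cv2.setMouseCallback("annotate", on_mouse)

while True:
    if dirty and points:
        masks, _, _ = predictor.predict(
            point_coords=np.array(points),
            point_labels=np.array(labels),
            multimask_output=False,
        )
        mask, dirty = masks[0], False
    shown = frame.copy()
    if mask is not None:                          # overlay the mask in green
        shown[mask] = (0.5 * shown[mask] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    cv2.imshow("annotate", shown)
    if (cv2.waitKey(30) & 0xFF) == ord("q"):      # 'q' quits this sketch
        break
cv2.destroyAllWindows()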

Usage

Here are the steps to get started with this project:

  1. Clone this repository to your local machine.

    git clone https://github.com/zyan-repository/track-anything-opencv-backend.git
  2. Navigate to the project directory.

    cd track-anything-opencv-backend
  3. Install the necessary dependencies.

    pip install -r requirements.txt
  4. Run the project. The resulting masks are written to --mask_dir (a quick way to inspect them is sketched after these steps).

    python opencv_backend.py --video_path your_video_path --mask_dir where_you_want_to_save_masks
    # python opencv_backend.py --video_path your_video_path --mask_dir where_you_want_to_save_masks --sam_model_type vit_b  # for lower memory usage
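
After tracking, the masks end up in the directory passed via --mask_dir. As a quick sanity check you can read them back with OpenCV; the snippet below is a sketch that assumes one image file per frame (the exact naming and format are determined by opencv_backend.py).

# Minimal sketch for inspecting the saved masks, assuming opencv_backend.py
# writes one image file per frame into --mask_dir (exact naming/format may differ).
import os
import cv2

mask_dir = "where_you_want_to_save_masks"   # same path passed via --mask_dir

for name in sorted(os.listdir(mask_dir)):
    mask = cv2.imread(os.path.join(mask_dir, name), cv2.IMREAD_GRAYSCALE)
    if mask is None:
        continue                            # skip anything that is not an image
    coverage = (mask > 0).mean()
    print(f"{name}: {mask.shape[1]}x{mask.shape[0]} px, {coverage:.1%} of pixels masked")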

Model Files

Please download the models using the links provided:

Once you have downloaded the model files, please place them in the checkpoints directory within your project path.
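
A small script can confirm the files are in place before launching the backend. The filenames below are the ones published with the upstream SAM, XMem, and E2FGVI releases and are assumptions; confirm them against the download links and this fork's source.

# Minimal sketch: check that the downloaded weights are in the checkpoints
# directory before launching the backend. The filenames are assumptions based
# on the upstream SAM / XMem / E2FGVI releases.
from pathlib import Path

expected = [
    "sam_vit_h_4b8939.pth",   # SAM ViT-H (sam_vit_b_01ec64.pth if using --sam_model_type vit_b)
    "XMem-s012.pth",          # XMem tracker weights
    "E2FGVI-HQ-CP.pth",       # E2FGVI inpainting weights (only needed for inpainting)
]

ckpt_dir = Path("checkpoints")
for name in expected:
    print(f"{name}: {'found' if (ckpt_dir / name).exists() else 'MISSING'}")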

Please see below for the original README from Track-Anything.



Track-Anything is a flexible and interactive tool for video object tracking and segmentation. Developed upon Segment Anything, it can specify anything to track and segment via user clicks only. During tracking, users can flexibly change the objects they want to track or correct the region of interest if there are any ambiguities. These characteristics enable Track-Anything to be suitable for:

  • Video object tracking and segmentation with shot changes.
  • Visualized development and data annotation for video object tracking and segmentation.
  • Object-centric downstream video tasks, such as video inpainting and editing.

🚀 Updates

  • 2023/05/02: We uploaded tutorials in steps 🗺️. Check HERE for more details.

  • 2023/04/29: We improved inpainting by decoupling GPU memory usage and video length. Now Track-Anything can inpaint videos with any length! 😺 Check HERE for our GPU memory requirements.

  • 2023/04/25: We are delighted to introduce Caption-Anything ✍️, an inventive project from our lab that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT.

  • 2023/04/20: We deployed DEMO on Hugging Face 🤗!

  • 2023/04/14: We made Track-Anything public!

🗺️ Video Tutorials (Track-Anything Tutorials in Steps)

huggingface_demo_operation.mp4

🕹️ Example - Multiple Object Tracking and Segmentation (with XMem)

qingming.mp4

🕹️ Example - Video Object Tracking and Segmentation with Shot Changes (with XMem)

curry_good_night_low.mp4

🕹️ Example - Video Inpainting (with E2FGVI)

inpainting.mp4

💻 Get Started

Linux & Windows

# Clone the repository:
git clone https://github.com/gaomingqi/Track-Anything.git
cd Track-Anything

# Install dependencies: 
pip install -r requirements.txt

# Run the Track-Anything gradio demo.
python app.py --device cuda:0
# python app.py --device cuda:0 --sam_model_type vit_b # for lower memory usage

📖 Citation

If you find this work useful for your research or applications, please cite using this BibTeX:

@misc{yang2023track,
      title={Track Anything: Segment Anything Meets Videos}, 
      author={Jinyu Yang and Mingqi Gao and Zhe Li and Shang Gao and Fangjing Wang and Feng Zheng},
      year={2023},
      eprint={2304.11968},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

👏 Acknowledgements

The project is based on Segment Anything, XMem, and E2FGVI. Thanks to the authors for their efforts.
