
# Pre-training MGMAE

MGMAE is modified from VideoMAE V2, so please follow the VideoMAE V2 pre-training instructions to learn how to pre-train the model.

When `run_mgmae_pretraining.py` is launched via the scripts in the mgmae pre-train scripts folder, pre-training uses motion-guided masking. MGMAE defines several new custom arguments; their meanings can be found in `run_mgmae_pretraining.py`.
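
One quick way to list those arguments and their descriptions, assuming `run_mgmae_pretraining.py` exposes them through a standard argparse interface (not confirmed here), is to print the script's help text:

```bash
# Print the available arguments, including the MGMAE-specific ones,
# assuming run_mgmae_pretraining.py uses a standard argparse parser.
python run_mgmae_pretraining.py --help
```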

To use RAFT-small for extracting optical flow, please download `raft-small-clean.pth` and set `--flow_model '/path/of/the/downloaded/raft-small-clean.pth'` in the pre-training scripts. `raft-small-clean.pth` is modified from `raft-small.pth` in princeton-vl/RAFT.
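
As a rough sketch only, a single-node launch that passes the flow checkpoint might look like the following. Everything except `run_mgmae_pretraining.py` and `--flow_model` (the use of `torchrun`, the GPU count, and the data/output flag names and paths) is an assumption in the style of typical VideoMAE V2 scripts, not taken from this repository; copy the real command from the scripts in the mgmae pre-train scripts folder.

```bash
# Hypothetical single-node launch sketch. Flag names other than --flow_model
# are assumptions borrowed from VideoMAE V2-style scripts, and all paths are
# placeholders to be replaced with your own.
torchrun --nproc_per_node=8 run_mgmae_pretraining.py \
    --data_path '/path/to/your/train.csv' \
    --output_dir '/path/to/your/output_dir' \
    --flow_model '/path/of/the/downloaded/raft-small-clean.pth'
```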