Training reproduction is impossible (attached script) #35
Hello, what is your email?
Hello, my email address is:
Thank you for the insights into your training procedure. We follow a similar procedure for DCVC-HEM (fine-tuning on Vimeo-90K, training procedure following TCM, cascaded loss, lr=1e-5, 7 frames, random quality in each training iteration) and also observe a deterioration in RD performance. Have you experimented with fine-tuning and observed the same results? Evaluation setting: 96 frames, YUV-PSNR, GOP 32.
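For reference, the fine-tuning setup described in the comment above can be summarized as a config fragment. This is a hypothetical sketch: the key names and structure are illustrative and not taken from any DCVC-HEM release; only the values come from the comment.

```python
# Hypothetical config summarizing the fine-tuning setup described above;
# key names are illustrative, values are from the comment.
finetune_config = {
    "base_model": "DCVC-HEM",         # start from the official pretrained weights
    "dataset": "vimeo90k",            # Vimeo-90K septuplets
    "schedule": "TCM",                # training procedure following TCM
    "loss": "cascaded",               # per-frame RD losses summed over the clip
    "lr": 1e-5,
    "frames_per_clip": 7,
    "quality": "random-per-iteration",  # random quality index each iteration
    # evaluation protocol from the comment
    "eval": {"frames": 96, "metric": "YUV-PSNR", "gop": 32},
}
```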
Do you mean you loaded the official model weights and fine-tuned them? The fact that this still degrades performance may indicate that the released DCVC-HEM weights were trained with an improved strategy!
I am currently working on reproducing DCVC models (TCM, HEM, DC). I have implemented the training_step using pytorch_lightning as shown below.
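The referenced training_step code did not survive in this copy of the thread. Below is a minimal, hypothetical sketch of what a cascaded-loss training_step for a DCVC-style codec might look like, assuming the setup discussed in this thread (7-frame clips, random quality per iteration, losses accumulated along the reconstruction chain). The `DummyCodec`, the lambda values, and all names are illustrative stand-ins, not the authors' actual code.

```python
# Hypothetical sketch of a cascaded-loss training step in the style of a
# pytorch_lightning training_step; all names and values are illustrative.
import random

import torch
import torch.nn as nn


class DummyCodec(nn.Module):
    """Stand-in for a DCVC-style codec: returns a reconstruction and a rate proxy."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, frame, ref, q_index):
        recon = self.conv(frame) + ref  # toy "conditional" reconstruction
        bpp = recon.abs().mean()        # toy bits-per-pixel proxy
        return recon, bpp


# Per-quality rate-distortion weights (illustrative values only).
LAMBDAS = [85, 170, 380, 840]


def training_step(model, batch):
    """Cascaded loss over a clip of shape (B, T, C, H, W).

    Each frame is coded against the previous *reconstruction* rather than the
    ground-truth frame, so errors propagate as they would at inference time;
    the per-frame RD losses are summed over the clip.
    """
    frames = batch
    q = random.randrange(len(LAMBDAS))   # random quality per training iteration
    ref = frames[:, 0]                   # treat frame 0 as the intra/reference frame
    total = 0.0
    for t in range(1, frames.size(1)):
        recon, bpp = model(frames[:, t], ref, q)
        dist = nn.functional.mse_loss(recon, frames[:, t])
        total = total + LAMBDAS[q] * dist + bpp
        ref = recon                      # cascade: feed back the reconstruction
    return total / (frames.size(1) - 1)
```

In a LightningModule this function body would live in `training_step(self, batch, batch_idx)`, with `model` being `self`; it is written as a free function here only to keep the sketch self-contained.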
However, the performance results after training are not satisfactory, and I observe the same phenomenon for all models.
If anyone has identified a similar pattern and has solutions, it would be great to work on it together!
Feel free to reach out to me via email. (I would also be happy to share minor modifications to the model classes.)