[Feature] Add a config of TOFlow #811
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #811      +/-   ##
==========================================
+ Coverage   83.04%   83.08%   +0.03%
==========================================
  Files         216      219       +3
  Lines       12239    12354     +115
  Branches     1975     2000      +25
==========================================
+ Hits        10164    10264     +100
- Misses       1767     1774       +7
- Partials      308      316       +8
            ]
            metrics[key] = dict(
                PSNR=metrics_data[0], SSIM=metrics_data[1])
        except ValueError:
What will trigger a ValueError?
Supplementary information for the model, such as the pretrained SPyNet in TOFlow.
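The exchange above can be sketched as follows. This is a hypothetical reconstruction of the parsing loop under discussion (the row values and helper name are assumptions, not the exact mmediting code): rows whose cells are all numbers parse into PSNR/SSIM entries, while supplementary rows such as a "Pretrained SPyNet" note cannot be converted with `float()`, so they raise `ValueError` and are skipped.

```python
def parse_metrics(rows):
    """Collect PSNR/SSIM per model, skipping non-numeric rows.

    `rows` is a list of (key, cells) pairs, where `cells` are the
    string fields of a metrics table row.
    """
    metrics = {}
    for key, cells in rows:
        try:
            metrics_data = [float(c) for c in cells]
            metrics[key] = dict(
                PSNR=metrics_data[0], SSIM=metrics_data[1])
        except ValueError:
            # Supplementary information (e.g. "Pretrained SPyNet")
            # is not a float, so float() raises ValueError here.
            continue
    return metrics
```

For example, `parse_metrics([("toflow", ["33.08", "0.954"]), ("note", ["Pretrained SPyNet"])])` keeps only the `toflow` entry.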
# train
train=dict(
    type='RepeatDataset',
    times=1000,
Why repeat 1000 times instead of letting IterBasedRunner iterate over it internally?
This is a safer approach.
@wangruohui Reloading the dataset consumes additional time, so repeating it avoids that cost.
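The idea behind `times=1000` can be sketched with a minimal `RepeatDataset` wrapper (an assumption modeled on the mmcv-style class named in the config, not its exact implementation): indices wrap around with a modulo, so the underlying dataset and its annotations are loaded once and never re-instantiated between epochs.

```python
class RepeatDataset:
    """Wrap a dataset so it appears `times` as long.

    Repetition is done by index arithmetic, so the wrapped dataset
    (and any annotation files it loaded) is built only once.
    """

    def __init__(self, dataset, times):
        self.dataset = dataset
        self.times = times

    def __getitem__(self, idx):
        # Wrap around instead of reloading the dataset.
        return self.dataset[idx % len(self.dataset)]

    def __len__(self):
        return self.times * len(self.dataset)
```

With `times=1000`, a 3-sample dataset behaves like a 3000-sample one, which keeps an epoch-style loop busy without the reload overhead discussed above.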
root_dir = 'data/vimeo_triplet'
data = dict(
    workers_per_gpu=1,
    train_dataloader=dict(samples_per_gpu=1, drop_last=True),
How many GPUs by default does this config run on?
One GPU is enough.
1 GPU with batch size 1?
yes
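The answer above follows from how batch size composes in mmcv-style configs: the effective batch size is `samples_per_gpu` multiplied by the number of GPUs (a general rule of these configs, stated here as an assumption about this one). A trivial sketch:

```python
def effective_batch_size(samples_per_gpu: int, num_gpus: int) -> int:
    """Total samples consumed per training iteration across all GPUs."""
    return samples_per_gpu * num_gpus

# The config above: samples_per_gpu=1 on a single GPU
# gives an effective batch size of 1 per iteration.
```

So `train_dataloader=dict(samples_per_gpu=1, ...)` on one GPU means exactly one sample per iteration, as confirmed in the reply.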
    interval=100,
    hooks=[
        dict(type='TextLoggerHook', by_epoch=False),
    ])
visual_config = None
What is this config for?
* [Feature] Add config of TOFlow * Update * Update * Update * Update * Update * Update * Update
No description provided.