Add SSN model. #55

Merged
merged 4 commits into from
Aug 31, 2020

Conversation

JackyTown (Contributor):

No description provided.

codecov bot commented Jul 27, 2020

Codecov Report

Merging #55 into master will increase coverage by 0.86%.
The diff coverage is 97.25%.

@@            Coverage Diff             @@
##           master      #55      +/-   ##
==========================================
+ Coverage   84.21%   85.08%   +0.86%     
==========================================
  Files          73       75       +2     
  Lines        4175     4418     +243     
  Branches      638      674      +36     
==========================================
+ Hits         3516     3759     +243     
+ Misses        554      549       -5     
- Partials      105      110       +5     
Flag         Coverage Δ
#unittests   85.08% <97.25%> (+0.86%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                            Coverage Δ
mmaction/models/localizers/ssn.py         91.66% <91.66%> (ø)
mmaction/models/heads/ssn_head.py         98.84% <98.84%> (ø)
mmaction/models/heads/__init__.py         100.00% <100.00%> (ø)
mmaction/models/localizers/__init__.py    100.00% <100.00%> (ø)
mmaction/models/localizers/base.py        57.14% <100.00%> (+19.64%) ⬆️
mmaction/models/localizers/bmn.py         99.00% <100.00%> (ø)
mmaction/models/localizers/bsn.py         100.00% <100.00%> (ø)
mmaction/models/losses/ssn_loss.py        100.00% <0.00%> (+1.85%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f183c1c...93f260e.

    dim=1) / num_multiplier
if scale_factors is not None:
    part_feat = (
        part_feat * scale_factors.view(num_samples, 1))

Contributor:

scale_factors[:, None]


Contributor (Author):

I tried it, but it failed with a shape error.

assert x.size(1) == self.feat_dim
num_ticks = proposal_ticks.size(0)

out_activity_scores = torch.zeros((num_ticks, self.activity_score_len),

Contributor:

Use zeros_like, so that both the dtype and device are correct.


Collaborator:

torch.zeros_like() seems unsuitable for this case.

scale_factor = 1.0

sum_parts = sum(stage_cfg)
tick_left = float(ticks[stage_idx])

Contributor:

Is it safe to remove float?


Contributor (Author):

Maybe it is better to add float to tick_right as well. If tick_right is an integer tensor such as torch.tensor(3, dtype=torch.int32), then torch.arange(tick_left, tick_right + 1e-5, num_parts) will not work.


Contributor (Author):

To clarify: tick_right + 1e-5 -> torch.tensor(3, dtype=torch.int32), i.e. the epsilon is lost.

raw_scale_score = raw_score.mean(
    dim=0) * scale_factor
out_scores[
    index, :] += raw_scale_score.detach().cpu()

Contributor:

Is it possible to move these computations to the GPU?


Contributor:

OK, it is on the GPU.

    self.regressor_fc.out_features,
    stpp_feat_multiplier, self.activity_fc.in_features).transpose(
        0, 1).contiguous().view(-1, self.activity_fc.in_features)
reg_bias = self.regressor_fc.bias.data.view(1, -1).expand(

Contributor:

Is expand().contiguous() equivalent to repeat()?


Contributor (Author):

I tried it, but it failed with a shape error.

@innerlee merged commit 1ae0122 into open-mmlab:master on Aug 31, 2020