
no torch.nn.functional.normalize #5

Closed
eeric opened this issue Jun 15, 2017 · 7 comments

Comments

@eeric

eeric commented Jun 15, 2017

File "/home/yq/work/face_class/diracnets/diracnet.py", line 97, in block
    w = beta * F.normalize(w.view(w.size(0), -1)).view_as(w) + alpha * delta
AttributeError: 'module' object has no attribute 'normalize'

@szagoruyko
Owner

update your pytorch
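
A quick way to confirm the installed version is the problem, as a minimal sketch (F.normalize is absent from old releases such as the 0.1.12 build referenced below):

import torch
import torch.nn.functional as F

print(torch.__version__)        # e.g. '0.1.12_2', which predates F.normalize
print(hasattr(F, 'normalize'))  # False here means PyTorch needs updating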

@eeric
Author

eeric commented Jun 15, 2017

Unfortunately, I couldn't find it. Could you provide a link?

@eeric
Author

eeric commented Jun 15, 2017

def normalize(input, p=2, dim=1, eps=1e-12):
    return input / input.norm(p, dim).clamp(min=eps).expand_as(input)

Is the function above equivalent to torch.nn.functional.normalize?
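
For reference, a minimal sketch on a current PyTorch showing that a manual L2 normalization matches torch.nn.functional.normalize (note that newer versions need keepdim=True on norm to keep the broadcast dimension):

import torch
import torch.nn.functional as F

x = torch.randn(4, 16)
# manual row-wise L2 normalization; keepdim=True preserves the reduced dim
manual = x / x.norm(2, dim=1, keepdim=True).clamp(min=1e-12)
print((F.normalize(x, p=2, dim=1) - manual).abs().max().item())  # ~0.0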

@szagoruyko
Owner

@eeric
Author

eeric commented Jun 15, 2017

The function is the same as the one you provided, but the test accuracy was lower on CIFAR-10.

After deleting the True argument of input.norm (the keepdim flag in newer PyTorch), the program ran successfully:

def normalize(input, p=2, dim=1, eps=1e-12):
    return input / input.norm(p, dim).clamp(min=eps).expand_as(input)

and the call was changed to:

w = beta * normalize(w.view(w.size(0), -1)).view_as(w) + alpha * delta

The training log looks like this:

==> id: ./log/result (3/200), test_acc: 50.49

and the test_acc values are more volatile.

train_loss became 'nan' at the 87th epoch, when test_acc had reached 83.7%; from the next epoch to the end, test_acc dropped to 10.0%. Why?
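
One way to catch this earlier, as a hedged sketch (assert_finite is a hypothetical helper, not part of this repo), is to stop at the first non-finite loss instead of training on to epoch 200:

import math
import torch

def assert_finite(loss):
    # raise at the first NaN/Inf loss instead of silently diverging
    value = float(loss)
    if math.isnan(value) or math.isinf(value):
        raise RuntimeError('loss is not finite: %r' % value)

assert_finite(torch.tensor(0.5))             # passes
# assert_finite(torch.tensor(float('nan')))  # would raise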

@szagoruyko
Owner

szagoruyko commented Jun 16, 2017

@eeric why did you modify the code? It might diverge for two reasons:

  • your PyTorch is old and has unfixed bugs
  • the modifications you made are incorrect

Please make sure you're running the unmodified code with the latest PyTorch.

@eeric
Author

eeric commented Jun 16, 2017

1. Maybe. I tried setting lr=0.01 and replacing F.normalize (see point 3); the result seems better, as follows:

(60/200), test_acc: 92.86

2. My version was installed with the script below:

pip install http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
pip install torchvision

Which version are you using?

3. In diracnet.py I changed the line to

w = beta * normalize(w.view(w.size(0), -1)).view_as(w) + alpha * delta

with the normalize function below, the True argument of input.norm deleted:

def normalize(input, p=2, dim=1, eps=1e-12):
    return input / input.norm(p, dim).clamp(min=eps).expand_as(input)

With lr=0.01, over 200 epochs train_loss never became 'nan', and the highest test_acc reached 94.22%.
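
For completeness, the parameterization at diracnet.py line 97 runs as written on a current PyTorch; a self-contained sketch, where the shapes and the alpha/beta values are illustrative assumptions and delta stands for the Dirac (identity) kernel:

import torch
import torch.nn.functional as F

out_ch, in_ch, k = 8, 8, 3
w = torch.randn(out_ch, in_ch, k, k)      # free weight
delta = torch.zeros(out_ch, in_ch, k, k)
torch.nn.init.dirac_(delta)               # Dirac delta (identity) kernel
alpha, beta = 1.0, 0.1                    # illustrative scalars

# normalize each flattened filter to unit L2 norm, then recombine with delta
w_hat = beta * F.normalize(w.view(w.size(0), -1)).view_as(w) + alpha * delta
print(w_hat.shape)                        # torch.Size([8, 8, 3, 3])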

eeric closed this as completed Jun 19, 2017