3. PyTorch nn and nn.functional #3

Open
onjsdnjs opened this issue May 28, 2019 · 0 comments

onjsdnjs commented May 28, 2019

Required packages

import torch.nn as nn
import torch.nn.functional as F

Documentation


nn and nn.functional

The two modules provide similar functionality, but the way you use them differs; the short sketch after the lists below contrasts the two styles.

Features provided by nn

  • Parameters
  • Containers
  • Conv
  • Pooling
  • Padding
  • Non-linear Activation
  • Normalization
  • Recurrent
  • Linear
  • Dropout
  • Sparse
  • Distance
  • Loss
  • Vision
  • Data Parallel
  • Utilities
  • ...

Features provided by nn.functional

  • Conv
  • Pooling
  • Non-linear Activation
  • Normalization
  • Linear function(=fully connected layer)
  • Dropout
  • Loss
  • Vision
  • ...
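
As a rough illustration of that difference in usage (a minimal sketch, not from the original post; the tensor shape is arbitrary): nn exposes layers as module objects that hold their own configuration, while nn.functional exposes plain functions where everything is passed explicitly on each call.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 16, 32, 32)  # arbitrary example input: (minibatch, channels, H, W)

# nn style: build module objects once; they keep their own settings (and parameters, if any)
relu = nn.ReLU()
dropout = nn.Dropout(p=0.5)
y_module = dropout(relu(x))

# nn.functional style: plain functions; every argument is passed explicitly each call
y_functional = F.dropout(F.relu(x), p=0.5, training=True)

print(y_module.shape, y_functional.shape)  # both torch.Size([1, 16, 32, 32])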

Using conv2d

Let's look at the difference between nn and nn.functional using conv2d.

nn.Conv2d
nn.functional.conv2d

torch.nn.Conv2d vs. torch.nn.functional.conv2d

Both apply a 2D convolution over an input image.

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

nn.Conv2d

The weight is not declared directly; the layer creates and manages it internally.

  • in_channels : number of channels in the input image
  • out_channels : number of channels produced by the convolution
  • kernel_size : size of the convolution kernel (filter size)

torch.nn.functional.conv2d

The weight is declared directly (a filter created outside the call is passed in).

  • input : input tensor of shape ( minibatch × in_channels × iH × iW )
  • weight : filter tensor of shape ( out_channels × in_channels/groups × kH × kW ), see the shape sketch below
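
A small sketch of how the weight shape relates to out_channels, in_channels and groups when calling F.conv2d directly (the shapes below are made up purely for illustration):

import torch
import torch.nn.functional as F

# hypothetical shapes chosen only for illustration
N, in_channels, H, W = 1, 4, 8, 8
out_channels, groups, kH, kW = 6, 2, 3, 3

x = torch.randn(N, in_channels, H, W)
# weight shape must be (out_channels, in_channels / groups, kH, kW)
w = torch.randn(out_channels, in_channels // groups, kH, kW)

out = F.conv2d(x, w, bias=None, stride=1, padding=0, groups=groups)
print(out.shape)  # torch.Size([1, 6, 6, 6])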

Example

Let's carry out the following convolution (a 5×5 input convolved with a 3×3 filter) in PyTorch.

1. torch.nn.functional.conv2d

The code below creates the input and filter tensors, computes the result with F.conv2d, and prints the output.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np

# 1 x 1 x 5 x 5 input: (minibatch, in_channels, H, W)
input = torch.Tensor(np.array([[[
	[1,1,1,0,0],
	[0,1,1,1,0],
	[0,0,1,1,1],
	[0,0,1,1,0],
	[0,1,1,0,0]
]]]))

# 1 x 1 x 3 x 3 filter: (out_channels, in_channels, kH, kW)
filter = torch.Tensor(np.array([[[
	[1,0,1],
	[0,1,0],
	[1,0,1]
]]]))

# Variable wrappers (needed before PyTorch 0.4; plain tensors support autograd directly in newer versions)
input = Variable(input, requires_grad=True)
filter = Variable(filter)

out = F.conv2d(input, filter)
print(out)
tensor([[[[4., 3., 4.],
          [2., 4., 3.],
          [2., 3., 4.]]]], grad_fn=<ThnnConv2DBackward>)
[Finished in 0.5s]

2. torch.nn.Conv2d

Since nn.Conv2d creates its own weight, no filter is passed in.
Along with the result, the weight that was used is printed as well.
(The result differs on every run, because the weight is randomly initialized.)
A short sketch after the output below shows that the module call and F.conv2d match once the module's own weight is reused.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np

input = torch.Tensor(np.array([[[
	[1,1,1,0,0],
	[0,1,1,1,0],
	[0,0,1,1,1],
	[0,0,1,1,0],
	[0,1,1,0,0]
]]]))

input = Variable(input, requires_grad=True)

# in_channels=1, out_channels=1, kernel_size=3; the layer creates and randomly initializes its own weight
func = nn.Conv2d(1,1,3)
print(func.weight)

out = func(input)
print(out)
Parameter containing:
tensor([[[[-0.0219,  0.0051,  0.0586],
          [ 0.1440,  0.3288, -0.2014],
          [ 0.0100,  0.1854, -0.1580]]]], requires_grad=True)
tensor([[[[ 0.0342,  0.3050,  0.5113],
          [-0.2727,  0.2196,  0.4731],
          [-0.0924,  0.4095,  0.5477]]]], grad_fn=<ThnnConv2DBackward>)
[Finished in 0.5s]
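
To make the relationship between the two APIs explicit, here is a short sketch (not from the original post) showing that calling the nn.Conv2d module and calling F.conv2d with that module's own weight and bias give the same result:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 5, 5)
conv = nn.Conv2d(1, 1, 3)

# nn.Conv2d stores the weight/bias and performs the computation via the functional API,
# so reusing its parameters with F.conv2d reproduces the module's output
out_module = conv(x)
out_functional = F.conv2d(x, conv.weight, conv.bias)

print(torch.allclose(out_module, out_functional))  # True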

Reference: 김군이 (https://www.youtube.com/watch?v=VKhFeh92eps)
