# beaverpy

`beaverpy` is an implementation of PyTorch operators using only NumPy.

Implemented operators (with their PyTorch equivalents) include the following:
- Layers
  - `Conv2D` (`torch.nn.Conv2d`)
  - `MaxPool2D` (`torch.nn.MaxPool2d`)
  - `Linear` (`torch.nn.Linear`)
- Loss/Distance Functions
  - `MSELoss` (`torch.nn.MSELoss`)
  - `CosineSimilarity` (`torch.nn.CosineSimilarity`)
- Activations
  - `ReLU` (`torch.nn.ReLU`)
  - `Sigmoid` (`torch.nn.Sigmoid`)
  - `Softmax` (`torch.nn.Softmax`)
Note 1: The `[n, c, h, w]` (batch, channels, height, width) input format is used
Note 2: Test code that checks the correctness of the implementation is included in the respective notebooks and is also available as standalone `pytest` scripts
The following parameters are supported:

- `Conv2D`: `stride`, `padding`, `dilation`, `groups`
- `MaxPool2D`: `stride`, `padding`, `dilation`, `return_indices`
- `Linear`: `bias`
- `MSELoss`: `reduction`
- `CosineSimilarity`: `dim`, `eps`
- `Softmax`: `dim`
Install with:

```shell
pip3 install beaverpy
```
To use `Conv2D`:

```python
import beaverpy as bp
import numpy as np

in_channels = 6 # input channels
out_channels = 4 # output channels
kernel_size = (2, 2) # kernel size
_stride = (2, 1) # stride (optional)
_padding = (1, 3) # padding (optional)
_dilation = (2, 3) # dilation factor (optional)
_groups = 2 # groups (optional)

in_batches = 2 # input batches
in_h = 4 # input height
in_w = 4 # input width

_input = np.random.rand(in_batches, in_channels, in_h, in_w)

conv2d = bp.Conv2D(in_channels, out_channels, kernel_size, stride = _stride, padding = _padding, dilation = _dilation, groups = _groups)
_output = conv2d.forward(_input) # perform convolution
```
If you wish to provide your own kernels, define them and pass them as an argument to `forward()`:
```python
kernels = []
for k in range(out_channels):
    kernel = np.random.rand(int(in_channels / _groups), kernel_size[0], kernel_size[1]) # define a random kernel based on the kernel parameters
    kernels.append(kernel)

_output = conv2d.forward(_input, kernels) # perform convolution
```
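To reason about configurations like the one above, the expected output spatial size can be computed with the standard convolution shape formula (the same one documented for `torch.nn.Conv2d`). This helper is an illustration, not part of `beaverpy`:

```python
import math

def conv2d_out_shape(in_size, kernel_size, stride=(1, 1), padding=(0, 0), dilation=(1, 1)):
    """Expected (H_out, W_out) of a 2D convolution, per the standard shape formula."""
    return tuple(
        math.floor((in_size[i] + 2 * padding[i] - dilation[i] * (kernel_size[i] - 1) - 1) / stride[i] + 1)
        for i in range(2)
    )

# Parameters from the example above: 4x4 input, 2x2 kernel,
# stride (2, 1), padding (1, 3), dilation (2, 3)
print(conv2d_out_shape((4, 4), (2, 2), stride=(2, 1), padding=(1, 3), dilation=(2, 3)))  # (2, 7)
```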
To use `MaxPool2D`:

```python
in_channels = 3 # input channels
kernel_size = (6, 6) # kernel size
_stride = (1, 5) # stride (optional)
_padding = (1, 2) # padding (optional)
_dilation = (2, 1) # dilation factor (optional)
_return_indices = True # return max indices (optional)

in_batches = 3 # input batches
in_h = 11 # input height
in_w = 8 # input width

_input = np.random.rand(in_batches, in_channels, in_h, in_w)

maxpool2d = bp.MaxPool2D(kernel_size, stride = _stride, padding = _padding, dilation = _dilation, return_indices = _return_indices)
_output = maxpool2d.forward(_input) # perform max pooling
```
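To see what max pooling computes, here is a naive NumPy sketch over an `[n, c, h, w]` array (no padding, dilation, or index return). This is an illustration of the operation, not `beaverpy`'s implementation:

```python
import numpy as np

def maxpool2d_naive(x, kernel_size, stride=None):
    """Naive max pooling: take the max over each sliding window."""
    kh, kw = kernel_size
    sh, sw = stride if stride is not None else kernel_size  # default stride = kernel size
    n, c, h, w = x.shape
    out_h = (h - kh) // sh + 1
    out_w = (w - kw) // sw + 1
    out = np.empty((n, c, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = x[:, :, i * sh:i * sh + kh, j * sw:j * sw + kw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over the spatial window
    return out

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
print(maxpool2d_naive(x, (2, 2)))  # 2x2 windows -> [[5, 7], [13, 15]]
```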
To use `Linear`:

```python
in_samples = 128 # input samples
in_features = 20 # input features
out_features = 30 # output features

_input = np.random.rand(in_samples, in_features)

linear = bp.Linear(in_features, out_features)
_output = linear.forward(_input) # apply linear transformation
```

If you wish to provide your own weights and bias, define them and pass them as arguments to `forward()`:
```python
_weights = np.random.rand(out_features, in_features) # define random weights
_bias = np.random.rand(out_features) # define random bias

_output = linear.forward(_input, weights = _weights, bias_weights = _bias) # apply linear transformation
```
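The underlying operation is the affine map `y = x @ W.T + b`, using the same weight convention as `torch.nn.Linear` (`W` has shape `[out_features, in_features]`). A minimal standalone NumPy sketch, not `beaverpy`'s code:

```python
import numpy as np

x = np.array([[1.0, 2.0]])          # one sample with 2 features
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3 output features, 2 input features
b = np.array([0.5, 0.5, 0.5])       # one bias per output feature

y = x @ W.T + b                     # affine transformation
print(y)  # [[1.5 2.5 3.5]]
```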
To use `MSELoss`:

```python
dimension = np.random.randint(500) # dimension of the input and target
_input = np.random.rand(dimension) # define a random input of the above dimension
_target = np.random.rand(dimension) # define a random target of the above dimension

mseloss = bp.MSELoss()
_output = mseloss.forward(_input, _target) # compute MSE loss
```
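With the default `reduction`, the mean squared error is simply the mean of the elementwise squared differences. A standalone NumPy illustration:

```python
import numpy as np

_input = np.array([1.0, 2.0, 3.0])
_target = np.array([1.0, 0.0, 1.0])

mse = np.mean((_input - _target) ** 2)  # mean of squared differences
print(mse)  # (0 + 4 + 4) / 3
```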
To use `CosineSimilarity`:

```python
num_dim = np.random.randint(6) + 1 # number of input dimensions
shape = tuple(np.random.randint(5) + 1 for _ in range(num_dim)) # shape of input

_input1 = np.random.rand(*shape) # generate an input based on the dimensions and shape
_input2 = np.random.rand(*shape) # generate another input based on the dimensions and shape

_dim = np.random.randint(num_dim) # dimension along which CosineSimilarity is to be computed (optional)
_eps = np.random.uniform(low = 1e-10, high = 1e-6) # (optional)

cosinesimilarity = bp.CosineSimilarity(dim = _dim, eps = _eps)
_output = cosinesimilarity.forward(_input1, _input2) # compute cosine similarity
```
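Cosine similarity along a dimension is the dot product divided by the product of the norms, with the denominator clamped at `eps` for stability (following the formula documented for `torch.nn.CosineSimilarity`). A standalone NumPy sketch, not `beaverpy`'s code:

```python
import numpy as np

def cosine_similarity(x1, x2, dim=1, eps=1e-8):
    """Cosine similarity along `dim`, clamping the norm product at `eps`."""
    dot = np.sum(x1 * x2, axis=dim)
    denom = np.maximum(np.linalg.norm(x1, axis=dim) * np.linalg.norm(x2, axis=dim), eps)
    return dot / denom

a = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 0.0], [1.0, -1.0]])
print(cosine_similarity(a, b))  # parallel rows -> 1, orthogonal rows -> 0
```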
To use `ReLU`:

```python
_input = np.random.rand(10, 20, 3)

relu = bp.ReLU()
_output = relu.forward(_input) # apply ReLU
```
To use `Sigmoid`:

```python
_input = np.random.rand(10, 20, 3)

sigmoid = bp.Sigmoid()
_output = sigmoid.forward(_input) # apply sigmoid
```
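Both activations are simple elementwise maps: `max(x, 0)` for ReLU and the logistic function for sigmoid. A standalone NumPy illustration, not `beaverpy`'s code:

```python
import numpy as np

x = np.array([-2.0, 0.0, 3.0])

relu = np.maximum(x, 0.0)            # negative values clamp to zero
sigmoid = 1.0 / (1.0 + np.exp(-x))   # maps any real value into (0, 1)

print(relu)     # [0. 0. 3.]
print(sigmoid)  # sigmoid(0) == 0.5
```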
To use `Softmax`:

```python
_input = np.random.rand(1, 2, 1, 3, 4)
_dim = np.random.randint(_input.ndim) # dimension along which Softmax is to be computed (optional)

softmax = bp.Softmax(dim = _dim)
_output = softmax.forward(_input) # apply softmax
```
To do:

- Replace `torch.round()` with `np.allclose()` for tests
- Implement other operators
- Optimize code
This work is being done during my summer internship at DeGirum Corp., Santa Clara.
- If you use this code in your projects, please cite this repository and the author
- If you find a bug, create a pull request with a description of the bug and the proposed changes
- Have a look at the author's webpage for other interesting works!
README last updated on 06/08/2023