Description
The library is inconsistent in nn_primitives about when to use return values and when to use `var` parameters that are updated in place.
### `var` parameter
Arraymancer/src/nn_primitives/nnp_linear.nim, lines 20 to 28 in 92aa82e:

```nim
proc linear*[T](input, weight: Tensor[T], bias: Tensor[T], output: var Tensor[T]) {.inline.} =
  # Linear (Dense) forward primitive with bias
  #   - input tensor shape [batch_size, in_features]
  #   - weight tensor shape [out_features, in_features]
  #   - bias tensor shape [batch_size, out_features]
  # Output does not need to be initialized to 0 or the proper shape, data will be overwritten
  # Output is: Y = x * W.transpose + b
  output = input * weight.transpose
  output .+= bias
```
Arraymancer/src/nn_primitives/fallback/conv.nim, lines 19 to 20 in 92aa82e:

```nim
proc im2col[T](input: Tensor[T], kernel_size: Size2D,
               padding: Size2D = (0,0), stride: Size2D = (1,1), result: var Tensor[T]) =
```
Arraymancer/src/nn_primitives/nnp_convolution.nim, lines 65 to 70 in 92aa82e:

```nim
proc conv2d_backward*[T](input, weight, bias: Tensor[T],
                         padding: Size2D,
                         stride: Size2D,
                         grad_output: Tensor[T],
                         grad_input, grad_weight, grad_bias: var Tensor[T],
                         algorithm = Conv2DAlgorithm.Im2ColGEMM) =
```
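For comparison, here is a hypothetical call site (not from the library) showing what the `var`-parameter convention demands of the caller: the output tensor must be declared before the call, even though `linear` overwrites its shape and data anyway. The `randomTensor` initialization is only an assumption for illustration:

```nim
import arraymancer

let
  input  = randomTensor[float32]([4, 8], 1'f32)   # [batch_size, in_features]
  weight = randomTensor[float32]([16, 8], 1'f32)  # [out_features, in_features]
  bias   = randomTensor[float32]([1, 16], 1'f32)

var output: Tensor[float32]          # must be declared up front
linear(input, weight, bias, output)  # result written in place
```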
### Return value
Arraymancer/src/nn_primitives/fallback/conv.nim, lines 57 to 59 in 92aa82e:

```nim
proc col2im*[T](input: Tensor[T], channels, height, width: int,
                kernel_size: Size2D,
                padding: Size2D = (0,0), stride: Size2D = (1,1)): Tensor[T] =
```
Arraymancer/src/nn_primitives/nnp_convolution.nim, lines 28 to 31 in 92aa82e:

```nim
proc conv2d*[T](input, weight, bias: Tensor[T],
                padding: Size2D = (0,0),
                stride: Size2D = (1,1),
                algorithm = Conv2DAlgorithm.Im2ColGEMM): Tensor[T] {.inline.} =
```
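One practical upside of the return-value style is composability. A sketch (assuming a `relu` primitive exists alongside `conv2d`, which is not shown in the excerpts above):

```nim
# With return values the primitives chain via method-call syntax:
let activation = input.conv2d(weight, bias).relu()

# The equivalent var-parameter pipeline needs intermediate declarations
# (hypothetical var-parameter variants of conv2d/relu):
var convOut, activation2: Tensor[float32]
conv2d(input, weight, bias, convOut)
relu(convOut, activation2)
```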
### Possibilities

- **Always use `var` parameters**
  - 👍 Makes sure there is no extra allocation
  - 👍 Consistency
  - 👎 The result must be declared before calling the function (and its shape computed, for `im2col` for example)
  - 👎 No chaining
- **Always use return values**
  - 👍 Consistency
  - 👍 No call-site burden
  - 👎 Sometimes the return value is a tuple (i.e. the caller is forced to read the doc for unwrapping)
    - Can be alleviated with named tuple fields
    - If it's a tuple, chaining functions is not easy (but is chaining needed for nn_primitives?)
- **Return value if single output, `var` parameters otherwise?**
  - 👎 Inconsistent
  - The worst of both worlds?
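Worth noting when weighing the allocation argument: a Nim proc with a return type writes into the implicit `result` variable, so a return-value version of `linear` can keep essentially the same in-place body as the excerpt above, and returning often avoids an extra copy thanks to Nim's move semantics. A sketch (not library code):

```nim
# Sketch of a return-value variant of linear; `result` is Nim's
# implicit return variable, so the body is unchanged from the
# var-parameter version except for the signature.
proc linear*[T](input, weight, bias: Tensor[T]): Tensor[T] {.inline.} =
  result = input * weight.transpose
  result .+= bias
```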