
Description
It appears that per-sample weights end up mismatched with their samples when valid_size > 0.0. I've included a minimal reproduction below. The weights array and the desired outputs are constructed to be identical, so at the start of each batch they should match element for element; they do when valid_size == 0.0, but with any valid_size > 0.0 (e.g. 0.1) they typically do not.
I'll submit a PR when I've got a fix.
```python
#!/usr/bin/env python3
import numpy as np

import sknn.mlp


def callback(event, **variables):
    # At the start of every batch, print the batch's targets and weights
    # side by side; since w was built to equal y, any row where the two
    # differ is a misaligned weight.
    if event == 'on_batch_start':
        print('yb', np.transpose(variables['yb']))
        print('wb', variables['wb'])
        print('-------------------------')


if __name__ == '__main__':
    samples = 20
    X = np.random.random_sample((samples, 4))
    y = np.arange(0, samples, dtype=np.float32)
    y /= samples
    # Each sample's weight equals its target, so mismatches are easy to spot.
    w = y.copy()
    layers = [sknn.mlp.Layer('Rectifier', units=10),
              sknn.mlp.Layer('Linear', units=1)]

    print('#######################################')
    print('         With valid_size = 0.0         ')
    print('#######################################')
    net = sknn.mlp.Regressor(layers=layers, n_iter=1, valid_size=0.0,
                             callback=callback, batch_size=4)
    net.fit(X, y, w)

    print('#######################################')
    print('         With valid_size = 0.1         ')
    print('#######################################')
    net = sknn.mlp.Regressor(layers=layers, n_iter=1, valid_size=0.1,
                             callback=callback, batch_size=4)
    net.fit(X, y, w)
```
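
In case it helps with triage: my guess, not yet verified against the sknn source, is that the train/validation split permutes X and y together but never applies the same permutation to w. If so, the generic fix is to draw one permutation and index all three arrays with it. Here is a minimal sketch of that pattern; split_with_weights and its signature are hypothetical illustrations, not part of sknn's API.

```python
import numpy as np


def split_with_weights(X, y, w, valid_size, seed=None):
    """Hypothetical helper: shuffle X, y and w with ONE shared permutation,
    then carve off the validation fraction. The point is that w must be
    reindexed alongside X and y, never left in its original order."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(len(X))       # one permutation for all arrays
    X, y, w = X[perm], y[perm], w[perm]  # keeps each (sample, target, weight) triple aligned
    n_valid = int(len(X) * valid_size)
    if n_valid == 0:
        return (X, y, w), None
    return ((X[n_valid:], y[n_valid:], w[n_valid:]),
            (X[:n_valid], y[:n_valid], w[:n_valid]))
```

Because X, y, and w all share `perm`, each weight travels with its sample through both the shuffle and the split, which is exactly the invariant the script above shows being violated.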