This is a PyTorch-like neural network library written in C. Throughout the library, vectors are treated as 1 x n matrices. A wrapper over the GSL library, written in matrix.c, provides invoking syntax similar to NumPy. Sample models are available in the models directory.
nnlibc can be compiled to WebAssembly and run on the SilverLineFramework Linux Runtime or the Wasmer runtime.
The GNU Scientific Library (GSL) is used for matrix and vector operations; the matrix.c wrapper can be invoked to perform them.
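For reference, this is what a small matrix product looks like in raw GSL; matrix.c wraps calls like these behind its NumPy-style interface. This sketch uses only standard GSL functions, not the wrapper's own names.

#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

int main(){
    // A 1 x 3 "vector" and a 3 x 2 matrix, following the library's 1 x n convention
    gsl_matrix* v = gsl_matrix_alloc(1, 3);
    gsl_matrix* m = gsl_matrix_alloc(3, 2);
    gsl_matrix_set_all(v, 1.0);
    gsl_matrix_set_all(m, 0.5);

    // out = v * m, i.e. (1 x 3)(3 x 2) -> 1 x 2
    gsl_matrix* out = gsl_matrix_alloc(1, 2);
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, v, m, 0.0, out);
    printf("out[0][0] = %f\n", gsl_matrix_get(out, 0, 0));  // prints 1.500000

    gsl_matrix_free(v);
    gsl_matrix_free(m);
    gsl_matrix_free(out);
    return 0;
}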
Please follow the steps given HERE.
Install Emscripten.
Download the 'Current Stable Version' from the GSL webpage.
In the downloaded GSL folder, say gsl-2.7.1, run the following commands:
emconfigure ./configure
emmake make LDFLAGS=-all-static
sudo make install
GSL will be installed at /opt/gsl-2.7.1 with WASM executables.
Compile natively with gcc:
gcc -Wall *.c -lm -lgsl -lgslcblas -o Output
Compile to WebAssembly with emcc:
emcc *.c -o Output.wasm -I/opt/gsl-2.7.1/include -L/opt/gsl-2.7.1/lib -lgsl -lm -lgslcblas -lc -s STANDALONE_WASM
Wasmer
Install Wasmer using the following command.
curl https://get.wasmer.io -sSfL | sh
Run the Wasm output file:
wasmer Output.wasm
SilverLineFramework Linux Runtime
Follow the Setup steps here.
To run the Wasm output file, open 4 terminal windows.
Terminal 0: MQTT
mosquitto
Terminal 1: Orchestrator
cd orchestrator/arts-main
make run
Terminal 2: Linux Runtime
In this example, the library directory ('dir') is '/home/ritzdevp/nnlibc'; replace it with the path to your copy of the neural network library.
./runtime-linux/runtime --host=localhost:1883 --name=test --dir=/home/ritzdevp/nnlibc --appdir=/home/ritzdevp/nnlibc
Terminal 3: Run
python3 libsilverline/run.py --path Output.wasm --runtime test
The output will be visible in Terminal 2.
Reference: Random Number Distribution
#include <gsl/gsl_rng.h>

// Select and allocate a GSL random number generator
const gsl_rng_type* T;
gsl_rng* rng;
gsl_rng_env_setup();     // reads GSL_RNG_TYPE / GSL_RNG_SEED from the environment
T = gsl_rng_default;     // default generator type (mt19937 unless overridden)
rng = gsl_rng_alloc(T);  // allocate the generator instance
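Once a generator is allocated, samples can be drawn from GSL's distribution functions. A minimal self-contained sketch follows; gsl_ran_gaussian and gsl_rng_free are standard GSL calls, not part of nnlibc.

#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int main(){
    gsl_rng_env_setup();
    gsl_rng* rng = gsl_rng_alloc(gsl_rng_default);

    // one draw from a zero-mean Gaussian with standard deviation 1.0
    printf("%f\n", gsl_ran_gaussian(rng, 1.0));

    gsl_rng_free(rng);  // release the generator when done
    return 0;
}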
This library reads training and testing data from .dat files. The np2dat.py script converts a NumPy file to a .dat file, which can then be read into a gsl_matrix* by calling the load_data() function. For example, suppose you have a NumPy file at data/mnist_mini/x_train.npy. Convert it to .dat with the following command:
python3 np2dat.py data/mnist_mini/x_train.npy
The file data/mnist_mini/x_train.dat should now be visible. The data can be loaded into a gsl_matrix as shown below.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "data.h"

int main(){
    int train_len = 2000;
    // Note: each 28x28 image is already flattened to 784 values in the data
    gsl_matrix* x_train = load_data("data/mnist_mini/x_train.dat", train_len, 784);
    x_print_shape(x_train);
    return 0;
}
The output will be:
shape = (2000, 784)
The model below is an MLP with 2 hidden layers and one output layer, hence 3 total layers. The first hidden layer connects 784 inputs to 512 neurons and is placed at layer index 0; its sigmoid activation is applied at layer index 1. The second hidden layer (512 to 512) and its sigmoid follow at indices 2 and 3. The output layer (512 to 10) is at index 4, with an identity activation at index 5.
Xnet* mynet = Xnet_init(3);
//Hidden Layer 1
Linear* lin_layer1 = linear_init(784,512,0, rng);
xnet_add(mynet, lin_layer1);
Activation* act1 = Act_init("sigmoid", 1);
xnet_add(mynet, act1);
//Hidden Layer 2
Linear* lin_layer2 = linear_init(512,512,2, rng);
xnet_add(mynet, lin_layer2);
Activation* act2 = Act_init("sigmoid", 3);
xnet_add(mynet, act2);
//Output Layer
Linear* lin_layer3 = linear_init(512,10,4, rng);
xnet_add(mynet, lin_layer3);
Activation* act3 = Act_init("identity", 5);
xnet_add(mynet, act3);
The training loop runs for 3 epochs over 1000 data points. At the start of each iteration the gradients are zeroed out, a forward pass through the network produces the output, a backward pass computes the gradients against the desired label, and the weights are updated.
int num_epochs = 3;
for (int epoch = 0; epoch < num_epochs; epoch++){
    for (int i = 0; i < 1000; i++){
        net_zero_grad(mynet);                        // reset gradients for this iteration
        gsl_matrix* input = get_row(x_train, i);     // 1 x 784 input row
        gsl_matrix* output = net_forward(input, mynet);
        gsl_matrix* desired = get_row(y_train, i);   // label row for sample i
        net_backward(desired, mynet);                // backpropagate
        net_step(mynet, 0.01);                       // lr = 0.01
    }
    printf("Epoch %d done.\n", epoch);
}
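After training, accuracy can be checked along these lines. This is only a sketch: it assumes x_test and y_test were loaded with load_data() (100 rows each, as in the MNIST example below), that labels are one-hot rows, and that net_forward() returns the 1 x 10 output scores; the inline argmax loop is illustrative and not part of the library.

int correct = 0;
int test_len = 100;  // assumed test set size
for (int i = 0; i < test_len; i++){
    gsl_matrix* input = get_row(x_test, i);
    gsl_matrix* output = net_forward(input, mynet);
    gsl_matrix* desired = get_row(y_test, i);
    // predicted class = index of the largest score;
    // true class = index of the 1 in the one-hot label row
    size_t pred = 0, truth = 0;
    for (size_t j = 1; j < 10; j++){
        if (gsl_matrix_get(output, 0, j) > gsl_matrix_get(output, 0, pred)) pred = j;
        if (gsl_matrix_get(desired, 0, j) > gsl_matrix_get(desired, 0, truth)) truth = j;
    }
    if (pred == truth) correct++;
}
printf("Accuracy Percentage = %.3f\n", 100.0 * correct / test_len);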
XOR Example
Copy the contents of models/xor.c into playground.c, then compile and run. The output should be:
XOR MLP
Epoch 0 Loss 0.696.
Epoch 100 Loss 0.527.
Epoch 200 Loss 0.352.
Epoch 300 Loss 0.334.
Epoch 400 Loss 0.327.
0.000000 0.000000 Out = 0
0.000000 1.000000 Out = 1
1.000000 0.000000 Out = 1
1.000000 1.000000 Out = 0
MNIST Example
Copy the contents of models/mnist.c into playground.c, then compile and run. The output should be:
shape = (2000, 784)
shape = (2000, 10)
shape = (100, 784)
shape = (100, 10)
Designing the model
Epoch 0 done.
Epoch 1 done.
Epoch 2 done.
Accuracy Percentage = 65.000