ez-spark/Pyllab

Install

pip install Pyllab

Warning:

From pip you will download the .whl file matching your OS and Python version. All the .whl files are built by compiling C code. The files on PyPI have been compiled for the x86_64 architecture with the "-mavx2" flag. So, if you have, for example, a Pentium or any other processor that does not support the AVX2 extension, you can find alternative .whl files in the releases section of the GitHub repo, built with: no extensions, the SSE extension, the SSE2 extension, or the AVX extension.
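
If you are not sure whether your CPU supports AVX2, on Linux you can check the flags advertised in /proc/cpuinfo. A minimal sketch (Linux-only; it checks the same extension names the wheels are built with):

# check which SIMD extensions the CPU advertises (Linux only)
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

for ext in ("avx2", "avx", "sse2", "sse"):
    print(ext, "supported" if ext in flags else "not supported")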

or

Build .whl package on Linux

pip install -r requirements.txt
sh generate_wheel_unix.sh
  • On Linux specifically you also have to repair the wheel package; use the manylinux container with the repair tool:
sudo docker run -i -t -v `pwd`:/io quay.io/pypa/manylinux1_x86_64 /bin/bash

Once you are inside the container, run these lines:

cd io
sh repair_wheel_linux.sh

The repaired wheel package will be in the wheelhouse directory.

Build .whl package on macOS

pip install -r requirements.txt
sh generate_wheel_unix.sh
sh repair_wheel_macos.sh

Build .whl package on Windows

  • It is a pain. Why?

Pyllab is a Cython library that compiles .c files relying on POSIX system calls. Now you can see the problem here. Just follow along on this journey:

  • First of all, you need to comment out this line in init.pyx:
ctypedef stdint.uint64_t uint64_t
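Commented out, the line becomes:

# ctypedef stdint.uint64_t uint64_t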
  • Install MinGW via MSYS2 (https://www.msys2.org/): follow the installation steps, including the ones for installing mingw-w64

  • Now navigate to the Pyllab directory from MSYS2 and create the .lib library:

sh create_library.sh
  • Go to your Python folder (as an example, we assume Python 3.7):

  • You can find your Python folder with:

import sys

locate_python = sys.exec_prefix
print(locate_python)
  • Create the file Python37\Lib\distutils\distutils.cfg that should look like this:
[build]
compiler=mingw32
 
[build_ext]
compiler=mingw32
  • In Python37\Lib\distutils\cygwinccompiler.py, find the line:
elif msc_ver == '1900':

change it to

elif msc_ver == '1916':
  • Now another bug fix: go to Python37\include\pyconfig.h and add these lines:
/* Compiler specific defines */


#ifdef __MINGW32__
#ifdef _WIN64
#define MS_WIN64
#endif
#endif
  • Now you can run from MSYS2:
sh build_python_library.sh

Install .whl files

Once you have created the .whl file, you can install it locally using pip:

pip install package.whl

Import the library in python

import pyllab

Pyllab supports

  • Version: 1.0.2

  • Fully connected Layers

  • Convolutional Layers

  • Transposed Convolutional Layers

  • Residual Layers

  • Dropout

  • Layer normalization for Fully-connected, Convolutional, and Transposed Convolutional Layers

  • Group normalization for Convolutional and Transposed Convolutional Layers

  • 2d Max Pooling

  • 2d Average Pooling

  • 2d Padding

  • Local Response Normalization for Fully-connected, Convolutional, and Transposed Convolutional Layers

  • sigmoid function

  • relu function

  • softmax function

  • leaky_relu function

  • elu function

  • standard GD and SGD

  • Nesterov optimization algorithm

  • ADAM optimization algorithm

  • RADAM optimization algorithm

  • DiffGrad optimization algorithm

  • ADAMOD optimization algorithm

  • Cross Entropy Loss

  • Focal Loss

  • Huber Loss type1

  • Huber Loss type2

  • MSE Loss

  • KL Divergence Loss

  • Entropy Loss

  • Total Variational Loss

  • Contrastive 2D Loss

  • Edge Pop-up algorithm

  • Dueling Categorical DQN

  • Rainbow Training

  • Genetic Algorithm training (NEAT)

  • Multi-threading

  • Numpy input arrays

  • GPU Training and inference (Future implementation)

  • RNN

  • LSTM (Future implementation already tested in C)

  • Transformers (Future implementation semi-implemented in C)

  • Attention mechanism (Future implementation already tested in C)

  • Multi-head Attention mechanism (Future implementation already tested in C)

Genome API

import pyllab
# Init a genome from a .bin file; input_size and output_size must match the network stored in the file
g = pyllab.Genome("file.bin", input_size, output_size)
# Get the output from an input list
inputs = [1]*input_size
output = g.ff(inputs)
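
The same ff call can be run over several input vectors in a loop. A minimal sketch building on the API shown above; the file name and sizes are placeholders, and it assumes ff returns an indexable sequence of output activations:

import pyllab

input_size = 4    # placeholder: must match the network stored in the .bin file
output_size = 2   # placeholder
g = pyllab.Genome("file.bin", input_size, output_size)

samples = [[0.0] * input_size, [1.0] * input_size]
for sample in samples:
    out = g.ff(sample)  # feedforward on one input list
    # pick the index of the strongest output, assuming out is indexable
    best = max(range(output_size), key=lambda k: out[k])
    print(out, "argmax:", best)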

DL Model API

import pyllab
# Init a model from a .txt file
model = pyllab.Model(pyllab.get_dict_from_model_setup_file("./model/model_023.txt"))
# select the training mode (default is standard training, otherwise you can choose edge popup)
percentage_of_used_weights_per_layer = 0.5
model.set_training_edge_popup(percentage_of_used_weights_per_layer)
# select the multi-thread option
model.make_multi_thread(batch_size)
# select the loss function
model.set_model_error(pyllab.PY_FOCAL_LOSS, model.get_output_dimension_from_model(), gamma=2)
# init the optimization hyperparameters
train = pyllab.Training(lr=0.01, momentum=0.9, batch_size=batch_size,
                        gradient_descent_flag=pyllab.PY_ADAM,
                        current_beta1=pyllab.PY_BETA1_ADAM, current_beta2=pyllab.PY_BETA2_ADAM,
                        regularization=pyllab.PY_NO_REGULARIZATION,
                        total_number_weights=0, lambda_value=0,
                        lr_decay_flag=pyllab.PY_LR_NO_DECAY, timestep_threshold=0,
                        lr_minimum=0, lr_maximum=1, decay=0)
# train in supervised mode on a bunch of data (see the data-setup sketch below for inputs, outputs, epochs, batch_size, and shuffle)
for i in range(epochs):
    # save the model in a binary file "i.bin"
    model.save(i)
    inputs, outputs = shuffle(inputs, outputs)
    for j in range(0,inputs.shape[0],batch_size):
        # compute feedforward, error, backpropagation
        model.ff_error_bp_opt_multi_thread(1, 28, 28, inputs[j:j+batch_size],
                                           outputs[j:j+batch_size],
                                           model.get_output_dimension_from_model())
        # sum the partial derivatives over the batch
        model.sum_models_partial_derivatives()
        # update the model according to the optimization hyperparameters
        train.update_model(model)
        # update the optimization hyperparameters
        train.update_parameters()
        # reset the needed structures for another iteration
        model.reset()
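
The loop above assumes that batch_size, epochs, inputs, outputs, and shuffle are already defined. A minimal sketch of that scaffolding with placeholder random data, consistent with the 1x28x28 input shape passed to ff_error_bp_opt_multi_thread (replace with your real dataset):

import numpy as np

batch_size = 8
epochs = 10

# placeholder dataset: 64 random "images" flattened from 1x28x28, with one-hot labels
n_samples, n_classes = 64, 10
inputs = np.random.rand(n_samples, 28 * 28).astype(np.float32)
outputs = np.eye(n_classes, dtype=np.float32)[np.random.randint(0, n_classes, n_samples)]

def shuffle(inputs, outputs):
    # apply the same random permutation to data and labels
    p = np.random.permutation(inputs.shape[0])
    return inputs[p], outputs[p]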

Rainbow API

Look at the rainbow.py file in the test directory.