liashchynskyi/neuronix

Program module for parallel training of convolutional neural networks using DL4J & CUDA 🦄
Neuronix is a program module developed for biomedical image classification using GPUs and convolutional neural networks. Built with DL4J.

Features

  • Build your own CNN model or use a pre-trained one
  • Save/load models to/from JSON files
  • Separate classes for training and testing your model
  • Tune your model however you want

Note: The project uses RGB images of 224x224 pixels in JPG format. If needed, you can use this converter from PNG to JPG.
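If you prefer to stay in Java instead of using the linked converter, a minimal sketch using only the JDK's ImageIO can produce the expected 224x224 RGB JPGs (file names below are placeholders, not part of this project's API):

```java
import javax.imageio.ImageIO;
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class ImageConverter {
    // Converts any image readable by ImageIO (e.g. PNG) into the
    // 224x224 RGB JPG format this project expects for training data.
    public static void convertTo224Jpg(File in, File out) throws IOException {
        BufferedImage src = ImageIO.read(in);
        // JPEG has no alpha channel, so draw onto an opaque RGB canvas
        BufferedImage dst = new BufferedImage(224, 224, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src.getScaledInstance(224, 224, Image.SCALE_SMOOTH), 0, 0, null);
        g.dispose();
        ImageIO.write(dst, "jpg", out);
    }

    public static void main(String[] args) throws IOException {
        convertTo224Jpg(new File("input.png"), new File("output.jpg"));
    }
}
```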

If you want to add something else — all feature requests and contributions are welcome!

System requirements

To use this project, your computer must meet these minimal requirements:

  • Dual-core CPU, 2.5 GHz (something like an AMD Athlon 64 X2 4800+ or faster)
  • 4 GB of RAM (you can even use DDR2 RAM but it's slightly slower)
  • GPU with 2 GB of memory and CUDA 8.0 support

And that's it 😉

Installation

Before you begin, you need to install the following software on your computer:

  1. CUDA 8.0 library (download here)
  2. cuDNN v6.0 library for CUDA 8.0 (download here)
  3. Java 8 (1.8.0_101 or newer, download here)

If you have an AMD CPU, see this and this to avoid mistakes 😏 Then download the JAR from the releases page and add it to your project.

How to build a model?

You can build your own models as follows.

import neuronix.models.json.Layer;
import neuronix.models.json.Model;
import neuronix.utils.Utils;

import java.io.IOException;

public class Test {
    public static void main (String[] args) throws IOException {
        Model model = new Model();
        model.setModelName("CoolModel");
        model.setImageSize(224);
        model.setChannels(3);
        model.setBatchSize(5);
        model.setSeed(42);
        model.setIterations(1);
        model.setRegularization(true);
        model.setL2(1e-54);
        model.setLearningRate(1e-7);
        model.setNumLabels(5); //number of output classes your model can predict
        model.setMiniBatch(true);
        model.setActivation("relu");
        model.setWeightInit("relu");
        model.setGradientNormalization("RenormalizeL2PerLayer");
        model.setOptimizationAlgo("STOCHASTIC_GRADIENT_DESCENT");
        model.setUpdater("nesterovs");
        model.setMomentum(0.9);

        Layer initial = new Layer();
        initial.setId(0);
        initial.setType("init");
        initial.setName("cnn1");
        initial.setOut(50);
        initial.setKernel(new int[]{5, 5});
        initial.setStride(new int[]{1, 1});
        initial.setPadding(new int[]{0, 0});
        initial.setBias(0);

        Layer pool1 = new Layer();
        pool1.setId(1);
        pool1.setType("pool");
        pool1.setName("maxpool1");
        pool1.setKernel(new int[]{2, 2});
        pool1.setStride(new int[]{2, 2});

        Layer conv2 = new Layer();
        conv2.setId(2);
        conv2.setType("conv");
        conv2.setName("cnn2"); //each layer needs a unique name
        conv2.setOut(100);
        conv2.setKernel(new int[]{5, 5});
        conv2.setStride(new int[]{1, 1});
        conv2.setPadding(new int[]{0, 0});
        conv2.setBias(0);

        Layer dense = new Layer();
        dense.setId(3);
        dense.setType("dense");
        dense.setOut(500);
        dense.setActivation("relu");

        Layer output = new Layer();
        output.setId(4);
        output.setType("output");
        output.setOut(5); //number of output classes your model can predict
        output.setActivation("softmax");
        output.setLoss("NEGATIVELOGLIKELIHOOD");

        model.addLayer(initial);
        model.addLayer(pool1);
        model.addLayer(conv2);
        model.addLayer(dense);
        model.addLayer(output);

        String modeljson = Utils.encodeJson(model);
        Utils.writeJSON("path", modeljson);
    }
}

Or you can load a previously created model, as shown here. After that you can build your network:

JsonModelBuilder builder = new JsonModelBuilder(model);
MultiLayerNetwork network = builder.init(0, 0).build();

Configuration

You can also define the following parameters using the Prefs class.

Prefs.setCurrentLoadDir("path"); //where json models are stored
Prefs.setCurrentSaveDir("path"); //where trained .bin models are stored
Prefs.setCurrentSaveState(true); //if true - your model will be saved after training
Prefs.setCurrentWorkspaceState(false); // if true - set SINGLE workspace mode

More about workspaces.

Training

Trainer trainer = new Trainer(200, 1, 1e-3, 80);
trainer.setImagesPath("path/to/your/images/jpg");
trainer.setPathToNeuralNetModel('your/json/model');
trainer.setRandomSeed(42);
double[] results = trainer.train();

Classification

Classifier classifier = new Classifier("path/to/images", "savedModelNameWithoutBinExtension", new Random(42));
ObservableList<ClassificationResult> results = classifier.classify();

Switching to GPU

Want it to run faster? Switch to the GPU by setting the environment variable BACKEND_PRIORITY_GPU to a higher value than BACKEND_PRIORITY_CPU.
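ND4J compares these two values at startup and picks the backend with the higher priority. A small sanity-check sketch you can run before training (treating an unset variable as 0 is an assumption for illustration here, not ND4J's documented default):

```java
public class BackendCheck {
    // Reads an integer environment variable, treating "unset" as 0.
    static int priority(String name) {
        String v = System.getenv(name);
        return v == null ? 0 : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        int cpu = priority("BACKEND_PRIORITY_CPU");
        int gpu = priority("BACKEND_PRIORITY_GPU");
        System.out.println(gpu > cpu
                ? "GPU backend will be preferred"
                : "CPU backend will be preferred");
    }
}
```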

Memory problems

If the program throws a memory-related error, try running it with these JVM arguments (adjust the values to your RAM size): -Xms128m -XX:ReservedCodeCacheSize=240m -XX:+UseConcMarkSweepGC -XX:SoftRefLRUPolicyMSPerMB=50 -ea -Dsun.io.useCanonCaches=false -Djava.net.preferIPv4Stack=true -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Dorg.bytedeco.javacpp.maxbytes=2500000000 -Dorg.bytedeco.javacpp.maxphysicalbytes=2500000000

Fallback mode

On some platforms, popular BLAS libraries can be unstable and cause crashes under various circumstances. To address that (and possible future issues), ND4J provides an optional "fallback mode", which makes it use in-house implementations as a workaround. It acts as a "safe mode", familiar to any modern-OS user. To activate it, set the environment variable ND4J_FALLBACK to "true" or 1 before launching your app. This works in an Apache Spark environment as well as in a standalone app.

It is also possible to lower the level of optimization used by OpenBLAS, which is sometimes known to cause problems, by setting the OPENBLAS_CORETYPE environment variable to a value such as Athlon (for AMD processors) or Core2 (for Intel processors).