diff --git a/android-DL4JImageRecognitionDemo.md b/android-DL4JImageRecognitionDemo.md
new file mode 100644
index 000000000..c2708b194
--- /dev/null
+++ b/android-DL4JImageRecognitionDemo.md
@@ -0,0 +1,370 @@
+---
+title: Using Deeplearning4J in Android Applications - demo 2
+layout: default
+---
+# Using Deeplearning4J in Android Applications
+
+Contents
+
+* [Setting the Dependencies](#head_link1)
+* [Training and loading the Mnist model in the Android project resources](#head_link2)
+* [Accessing the trained model using an AsyncTask](#head_link7)
+* [Handling images from user input](#head_link3)
+* [Updating the UI](#head_link5)
+* [Conclusion](#head_link6)
+
+## DL4JImageRecognitionDemo
+This example application uses a neural network trained on the standard MNIST dataset of 28x28 greyscale images (0..255 pixel values) of hand-drawn digits 0..9. The application's user interface allows the user to draw a number on the device screen, which is then tested against the trained network. The output displays the most probable numeric value and its probability score. This tutorial covers the use of a trained neural network in an Android application, the handling of user-generated images, and the output of the results to the UI from a background thread. For a detailed guide demonstrating how to train and save the neural networks used in this application, please see this DL4J quickstart [tutorial](https://deeplearning4j.org/quickstart). More information on general prerequisites for building DL4J Android applications can be found [here](https://github.com/jrmerwin/Skymind-Android-Documentation/blob/master/Prereqs%20and%20Configuration%20for%20Android.md).
+
+
+## Setting the Dependencies
+Deeplearning4J applications require application-specific dependencies in the build.gradle file. The Deeplearning4J library in turn depends on the ND4J and OpenBLAS libraries, so these must also be added to the dependencies declaration. Starting with Android Studio 3.0, annotation processors need to be defined as well; if you are working in Android Studio 3.0 or later, include the dependencies for -x86 or -arm processors, depending on your device. Note that both can be included without conflict, as is done in the example app.
+```java
+ compile 'com.android.support:appcompat-v7:27.0.2'
+ compile 'com.android.support:design:27.0.2'
+ compile 'org.deeplearning4j:deeplearning4j-nn:0.9.1'
+ compile 'org.nd4j:nd4j-native:0.9.1'
+ compile 'org.nd4j:nd4j-native:0.9.1:android-x86'
+ compile 'org.nd4j:nd4j-native:0.9.1:android-arm'
+ compile 'org.bytedeco:javacpp:1.4'
+ compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-x86'
+ compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-arm'
+ testCompile 'junit:junit:4.12'
+```
+Depending on the combination of dependencies, duplication conflicts can arise that must be handled with exclusions. After adding the above dependencies and the exclusions listed below, sync the Gradle file and add additional exclusions if needed. The error message will identify the file path that should be added to the list of exclusions. An example error message with file path: **> More than one file was found with OS independent path 'org/bytedeco/javacpp/ windows-x86_64/msvp120.dll'**
+```java
+packagingOptions {
+
+ exclude 'META-INF/DEPENDENCIES'
+ exclude 'META-INF/DEPENDENCIES.txt'
+ exclude 'META-INF/LICENSE'
+ exclude 'META-INF/LICENSE.txt'
+ exclude 'META-INF/license.txt'
+ exclude 'META-INF/NOTICE'
+ exclude 'META-INF/NOTICE.txt'
+ exclude 'META-INF/notice.txt'
+ exclude 'META-INF/INDEX.LIST'
+
+ }
+```
+Compiling these dependencies involves a large number of files, thus it is necessary to set multiDexEnabled to true in defaultConfig.
+```java
+multiDexEnabled true
+```
+Finally, a conflict in the junit module versions will give the following error:
+
+> Conflict with dependency 'junit:junit' in project ':app'. Resolved versions for app (4.8.2) and test app (4.12) differ.
+
+This can be suppressed by forcing all of the junit modules to use the same version.
+```java
+configurations.all {
+ resolutionStrategy.force 'junit:junit:4.12'
+}
+```
+## Training and loading the Mnist model in the Android project resources
+
+Using a neural network requires a significant amount of processor power, which is in limited supply on mobile devices. Therefore, a background thread must be used for loading the trained neural network and testing the user-drawn image; we use an AsyncTask for this. In this application we will run the canvas draw code on the main thread and use an AsyncTask to load the drawn image from internal memory and test it against the trained model on a background thread. First, let's look at how to save the trained neural network we will be using in the application.
+
+You will need to begin by following the DeepLearning4J quick start [guide](https://deeplearning4j.org/quickstart) to set up, train, and save neural network models on a desktop computer. The DL4J example which trains and saves the Mnist model used in this application is *MnistImagePipelineExampleSave.java* and is included in the quick start guide referenced above. The code for the Mnist demo is also available [here](https://gist.github.com/tomthetrainer/7cb2fbc14a5c631a567a98c3134f7dd6). Running this demo will train the Mnist neural network model and save it as *"trained_mnist_model.zip"* in the *dl4j\target folder* of the *dl4j-examples* directory. You can then copy the file and save it in the raw folder of your Android project.
+
+
+
+## Accessing the trained model using an AsyncTask
+
+Now let’s start by writing our AsyncTask<*Params*, *Progress*, *Results*> to load and use the neural network on a background thread. The AsyncTask will use the parameter types `<String, Integer, INDArray>`. The *Params* type is set to String, which will pass the path of the saved image to the AsyncTask as it is executed. This path will be used in the doInBackground() method to locate and load the trained Mnist model. The *Results* parameter is of type INDArray, which will store the results from the neural network and pass them to the onPostExecute method, which has access to the main thread for updating the UI. For more on NDArrays, see https://nd4j.org/userguide. Note that the AsyncTask requires that we override two more methods (onProgressUpdate and onPostExecute), which we will get to later in the demo.
+```java
+private class AsyncTaskRunner extends AsyncTask<String, Integer, INDArray> {
+
+ // Runs in UI before background thread is called.
+ @Override
+ protected void onPreExecute() {
+ super.onPreExecute();
+ }
+
+ @Override
+ protected INDArray doInBackground(String... params) {
+ // Main background thread, this will load the model and test the input image
+ // The dimensions of the images are set here
+ int height = 28;
+ int width = 28;
+ int channels = 1;
+
+ //Now we load the model from the raw folder with a try / catch block
+ try {
+ // Load the pretrained network.
+ InputStream inputStream = getResources().openRawResource(R.raw.trained_mnist_model);
+ MultiLayerNetwork model = ModelSerializer.restoreMultiLayerNetwork(inputStream);
+
+ //load the image file to test
+ File f=new File(absolutePath, "drawn_image.jpg");
+
+ //Use the nativeImageLoader to convert to numerical matrix
+ NativeImageLoader loader = new NativeImageLoader(height, width, channels);
+
+ //put image into INDArray
+ INDArray image = loader.asMatrix(f);
+
+ //values need to be scaled
+ DataNormalization scalar = new ImagePreProcessingScaler(0, 1);
+
+ //then call that scalar on the image dataset
+ scalar.transform(image);
+
+ //pass through neural net and store it in output array
+ output = model.output(image);
+
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ return output;
+ }
+```
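The ImagePreProcessingScaler(0, 1) step above rescales each 0..255 greyscale pixel into the 0..1 range the network was trained on. The underlying arithmetic is just a division; here is a standalone plain-Java sketch of it (an illustration, not DL4J code):

```java
class PixelScaleSketch {
    // Min-max rescale of an 8-bit greyscale value into [0, 1],
    // mirroring what ImagePreProcessingScaler(0, 1) does element-wise
    static double scale(int pixel) {
        return pixel / 255.0;
    }

    public static void main(String[] args) {
        System.out.println(scale(0));    // black -> 0.0
        System.out.println(scale(255));  // white -> 1.0
        System.out.println(scale(51));   // grey  -> 0.2
    }
}
```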
+
+## Handling images from user input
+
+Now let's add the code for the drawing canvas that will run on the main thread and allow the user to draw a number on the screen. This is a generic draw program written as an inner class within the MainActivity. It extends View and overrides a series of methods. The drawing is saved to internal memory and the AsyncTask is executed with the image path passed to it in the onTouchEvent case statement for *MotionEvent.ACTION_UP*. This streamlines the interaction by automatically returning results for an image once the user completes the drawing.
+```java
+//code for the drawing input
+ public class DrawingView extends View {
+
+ private Path mPath;
+ private Paint mBitmapPaint;
+ private Paint mPaint;
+ private Bitmap mBitmap;
+ private Canvas mCanvas;
+
+ public DrawingView(Context c) {
+ super(c);
+
+ mPath = new Path();
+ mBitmapPaint = new Paint(Paint.DITHER_FLAG);
+ mPaint = new Paint();
+ mPaint.setAntiAlias(true);
+ mPaint.setStrokeJoin(Paint.Join.ROUND);
+ mPaint.setStrokeCap(Paint.Cap.ROUND);
+ mPaint.setStrokeWidth(60);
+ mPaint.setDither(true);
+ mPaint.setColor(Color.WHITE);
+ mPaint.setStyle(Paint.Style.STROKE);
+ }
+
+ @Override
+ protected void onSizeChanged(int W, int H, int oldW, int oldH) {
+ super.onSizeChanged(W, H, oldW, oldH);
+ mBitmap = Bitmap.createBitmap(W, H, Bitmap.Config.ARGB_4444);
+ mCanvas = new Canvas(mBitmap);
+ }
+
+ @Override
+ protected void onDraw(Canvas canvas) {
+ canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint);
+ canvas.drawPath(mPath, mPaint);
+ }
+
+ private float mX, mY;
+ private static final float TOUCH_TOLERANCE = 4;
+
+ private void touch_start(float x, float y) {
+ mPath.reset();
+ mPath.moveTo(x, y);
+ mX = x;
+ mY = y;
+ }
+ private void touch_move(float x, float y) {
+ float dx = Math.abs(x - mX);
+ float dy = Math.abs(y - mY);
+ if (dx >= TOUCH_TOLERANCE || dy >= TOUCH_TOLERANCE) {
+ mPath.quadTo(mX, mY, (x + mX)/2, (y + mY)/2);
+ mX = x;
+ mY = y;
+ }
+ }
+ private void touch_up() {
+ mPath.lineTo(mX, mY);
+ mCanvas.drawPath(mPath, mPaint);
+ mPath.reset();
+ }
+
+ @Override
+ public boolean onTouchEvent(MotionEvent event) {
+ float x = event.getX();
+ float y = event.getY();
+
+ switch (event.getAction()) {
+ case MotionEvent.ACTION_DOWN:
+ invalidate();
+ clear();
+ touch_start(x, y);
+ invalidate();
+ break;
+ case MotionEvent.ACTION_MOVE:
+ touch_move(x, y);
+ invalidate();
+ break;
+ case MotionEvent.ACTION_UP:
+ touch_up();
+ absolutePath = saveDrawing();
+ invalidate();
+ clear();
+ loadImageFromStorage(absolutePath);
+ onProgressBar();
+ //launch the asyncTask now that the image has been saved
+ AsyncTaskRunner runner = new AsyncTaskRunner();
+ runner.execute(absolutePath);
+ break;
+
+ }
+ return true;
+ }
+
+ public void clear(){
+ mBitmap.eraseColor(Color.TRANSPARENT);
+ invalidate();
+ System.gc();
+ }
+
+ }
+
+```
+Now we need to build a series of helper methods. First we will write the saveDrawing() method. It uses getDrawingCache() to retrieve the drawing from the drawingView and store it as a bitmap. We then create a file directory and a file for the bitmap called "drawn_image.jpg". Finally, FileOutputStream is used in a try / catch block to write the bitmap to the file location. The method returns the absolute path of the file location, which will be used by the loadImageFromStorage() method.
+```java
+public String saveDrawing(){
+ drawingView.setDrawingCacheEnabled(true);
+ Bitmap b = drawingView.getDrawingCache();
+
+ ContextWrapper cw = new ContextWrapper(getApplicationContext());
+ // set the path to storage
+ File directory = cw.getDir("imageDir", Context.MODE_PRIVATE);
+ // Create imageDir and store the file there. Each new drawing will overwrite the previous
+ File mypath=new File(directory,"drawn_image.jpg");
+
+ //use a fileOutputStream to write the file to the location in a try / catch block
+ FileOutputStream fos = null;
+ try {
+ fos = new FileOutputStream(mypath);
+ b.compress(Bitmap.CompressFormat.JPEG, 100, fos);
+ } catch (Exception e) {
+ e.printStackTrace();
+        } finally {
+            try {
+                if (fos != null) {
+                    fos.close();
+                }
+            } catch (IOException e) {
+                e.printStackTrace();
+            }
+        }
+ return directory.getAbsolutePath();
+ }
+```
+Next we will write the loadImageFromStorage method which will use the absolute path returned from saveDrawing() to load the saved image and display it in the UI as part of the output display. It uses a try / catch block and a FileInputStream to set the image to the ImageView *img* in the UI layout.
+```java
+ private void loadImageFromStorage(String path)
+ {
+
+ //use a fileInputStream to read the file in a try / catch block
+ try {
+ File f=new File(path, "drawn_image.jpg");
+ Bitmap b = BitmapFactory.decodeStream(new FileInputStream(f));
+ ImageView img=(ImageView)findViewById(R.id.outputView);
+ img.setImageBitmap(b);
+ }
+ catch (FileNotFoundException e)
+ {
+ e.printStackTrace();
+ }
+
+ }
+```
+We also need to write two methods that extract the predicted number from the neural network output and the confidence score, which we will call later when we complete the AsyncTask.
+```java
+//helper method to return the largest value in the output array
+ public static double arrayMaximum(double[] arr) {
+ double max = Double.NEGATIVE_INFINITY;
+ for(double cur: arr)
+ max = Math.max(max, cur);
+ return max;
+ }
+
+    // helper method to find the index (and therefore numerical value) of the largest confidence score
+ public int getIndexOfLargestValue( double[] array )
+ {
+ if ( array == null || array.length == 0 ) return -1;
+ int largest = 0;
+ for ( int i = 1; i < array.length; i++ )
+ {if ( array[i] > array[largest] ) largest = i; }
+ return largest;
+ }
+```
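As a standalone sanity check of these two helpers, here they are again outside the Activity, applied to a made-up 10-class output distribution (the numbers are hypothetical):

```java
class OutputHelpersSketch {
    // Same logic as the arrayMaximum helper above
    static double arrayMaximum(double[] arr) {
        double max = Double.NEGATIVE_INFINITY;
        for (double cur : arr) max = Math.max(max, cur);
        return max;
    }

    // Same logic as the getIndexOfLargestValue helper above
    static int getIndexOfLargestValue(double[] array) {
        if (array == null || array.length == 0) return -1;
        int largest = 0;
        for (int i = 1; i < array.length; i++) {
            if (array[i] > array[largest]) largest = i;
        }
        return largest;
    }

    public static void main(String[] args) {
        // Hypothetical output: the network is most confident the digit is a 3
        double[] results = {0.01, 0.02, 0.05, 0.80, 0.03, 0.02, 0.02, 0.02, 0.02, 0.01};
        System.out.println("Prediction: " + getIndexOfLargestValue(results)); // 3
        System.out.println("Confidence: " + arrayMaximum(results));          // 0.8
    }
}
```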
+Finally, we need a few methods we can call to control the visibility of an 'In Progress...' message while the background thread is running. These will be called when the AsyncTask is executed and in the onPostExecute method when the background thread completes.
+```java
+ public void onProgressBar(){
+ TextView bar = findViewById(R.id.processing);
+ bar.setVisibility(View.VISIBLE);
+ }
+
+ public void offProgressBar(){
+ TextView bar = findViewById(R.id.processing);
+ bar.setVisibility(View.INVISIBLE);
+ }
+```
+Now let's go to the onCreate method to initialize the draw canvas and set some global variables.
+```java
+public class MainActivity extends AppCompatActivity {
+
+ MainActivity.DrawingView drawingView;
+ String absolutePath;
+ public static INDArray output;
+
+ @Override
+ public void onCreate(Bundle savedInstanceState) {
+ super.onCreate(savedInstanceState);
+ setContentView(R.layout.activity_main);
+
+ RelativeLayout parent = findViewById(R.id.layout2);
+ drawingView = new MainActivity.DrawingView(this);
+ parent.addView(drawingView);
+ }
+```
+
+## Updating the UI
+
+Now we can complete our AsyncTask by overriding the onProgressUpdate and onPostExecute methods. Once the doInBackground method of the AsyncTask completes, the classification results will be passed to onPostExecute, which has access to the main thread and UI, allowing us to update the UI with the results. Since we will not be using the onProgressUpdate method, a call to its superclass will suffice.
+
+```java
+@Override
+ protected void onProgressUpdate(Integer... values) {
+ super.onProgressUpdate(values);
+ }
+```
+The onPostExecute method will receive an INDArray which contains the neural network results as a 1x10 array of probability values that the input drawing is each possible digit (0..9). From this we need to determine which position of the array contains the largest value and how large that value is. These two values determine which number the neural network has classified the drawing as and how confident the network is in that classification. They will be referred to in the UI as the *Prediction* and the *Confidence*, respectively. In the code below, the individual values for each position of the INDArray are copied into an array of type double using the getDouble() method on the result INDArray. We then get references to the TextViews to be updated in the UI and call our helper methods on the array to return the array maximum (confidence) and the index of the largest value (prediction). Note that we also limit the number of decimal places reported on the probabilities by setting a DecimalFormat pattern.
+```java
+
+ @Override
+ protected void onPostExecute(INDArray result) {
+ super.onPostExecute(result);
+
+ //used to control the number of decimals places for the output probability
+ DecimalFormat df2 = new DecimalFormat(".##");
+
+ //transfer the neural network output to an array
+ double[] results = {result.getDouble(0,0),result.getDouble(0,1),result.getDouble(0,2),
+ result.getDouble(0,3),result.getDouble(0,4),result.getDouble(0,5),result.getDouble(0,6),
+ result.getDouble(0,7),result.getDouble(0,8),result.getDouble(0,9),};
+
+ //find the UI tvs to display the prediction and confidence values
+ TextView out1 = findViewById(R.id.prediction);
+ TextView out2 = findViewById(R.id.confidence);
+
+ //display the values using helper functions defined below
+ out2.setText(String.valueOf(df2.format(arrayMaximum(results))));
+ out1.setText(String.valueOf(getIndexOfLargestValue(results)));
+
+ //helper function to turn off progress test
+ offProgressBar();
+ }
+```
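DecimalFormat comes from the standard java.text package; the ".##" pattern keeps at most two fraction digits and omits the leading zero before the decimal point. A standalone illustration, pinning the locale so the decimal separator is a predictable '.':

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

class FormatSketch {
    // Same ".##" pattern as the demo, with a fixed '.' decimal separator
    static String fmt(double value) {
        DecimalFormat df2 = new DecimalFormat(".##", new DecimalFormatSymbols(Locale.US));
        return df2.format(value);
    }

    public static void main(String[] args) {
        System.out.println(fmt(0.9732)); // .97
        System.out.println(fmt(0.5));    // .5
    }
}
```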
+
+## Conclusion
+
+This tutorial provides a basic framework for image recognition in an Android application using a DL4J neural network. It illustrates how to load a pre-trained DL4J model from the raw resources folder and how to test user-generated input images against the model. The AsyncTask then returns the output to the main thread and updates the UI.
+
+The complete code for this example is available at https://github.com/jrmerwin/DL4JImageRecognitionDemo
+
+
+
diff --git a/android-DL4JIrisClassifierDemo.md b/android-DL4JIrisClassifierDemo.md
new file mode 100644
index 000000000..0487ab9f3
--- /dev/null
+++ b/android-DL4JIrisClassifierDemo.md
@@ -0,0 +1,264 @@
+---
+title: Using Deeplearning4J in Android Applications - demo 1
+layout: default
+---
+# Using Deeplearning4J in Android Applications
+### DL4JIrisClassifierDemo
+The example application trains a small neural network on the device using Anderson’s Iris data set for iris flower type classification. For a more in-depth look at optimizing Android for DL4J, please see the Prerequisites and Configuration documentation [here](https://github.com/jrmerwin/Skymind-Android-Documentation/blob/master/Prereqs%20and%20Configuration%20for%20Android.md). This application has a simple UI that takes measurements of petal length, petal width, sepal length, and sepal width from the user and returns the probability that the measurements belong to one of three types of Iris (*Iris setosa*, *Iris versicolor*, and *Iris virginica*). The data set includes 150 measurement values (50 for each iris type) and training the model takes anywhere from 5-20 seconds, depending on the device.
+
+Contents
+
+* [Setting the Dependencies](#head_link1)
+* [Setting up the neural network on a background thread](#head_link2)
+* [Preparing the training data set and user input](#head_link3)
+* [Building and Training the Neural Network](#head_link4)
+* [Updating the UI](#head_link5)
+* [Conclusion](#head_link6)
+
+
+## Setting the Dependencies
+Deeplearning4J applications require several dependencies in the build.gradle file. The Deeplearning4J library in turn depends on the ND4J and OpenBLAS libraries, so these must also be added to the dependencies declaration. Starting with Android Studio 3.0, annotation processors need to be defined as well, requiring dependencies for -x86 or -arm processors.
+```java
+ compile 'com.android.support:appcompat-v7:27.0.2'
+ compile 'com.android.support:design:27.0.2'
+ compile 'org.deeplearning4j:deeplearning4j-nn:0.9.1'
+ compile 'org.nd4j:nd4j-native:0.9.1'
+ compile 'org.nd4j:nd4j-native:0.9.1:android-x86'
+ compile 'org.nd4j:nd4j-native:0.9.1:android-arm'
+ compile 'org.bytedeco:javacpp:1.4'
+ compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-x86'
+ compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-arm'
+ testCompile 'junit:junit:4.12'
+```
+The DL4J and ND4J libraries contain several identically named files, which requires exclusion statements in the packagingOptions. After adding the above dependencies to the build.gradle file, try syncing Gradle with the below exclusions and add additional exclusions if needed. The error message will identify the file path that should be added to the list of exclusions. An example error message with file path: **> More than one file was found with OS independent path 'org/bytedeco/javacpp/ windows-x86_64/msvp120.dll'**
+```java
+packagingOptions {
+
+ exclude 'META-INF/DEPENDENCIES'
+ exclude 'META-INF/DEPENDENCIES.txt'
+ exclude 'META-INF/LICENSE'
+ exclude 'META-INF/LICENSE.txt'
+ exclude 'META-INF/license.txt'
+ exclude 'META-INF/NOTICE'
+ exclude 'META-INF/NOTICE.txt'
+ exclude 'META-INF/notice.txt'
+ exclude 'META-INF/INDEX.LIST'
+
+    }
+```
+Compiling these dependencies involves a large number of files, thus it is necessary to set multiDexEnabled to true in defaultConfig.
+```java
+multiDexEnabled true
+```
+
+Finally, a conflict in the junit module versions will likely throw the following error:
+
+> Conflict with dependency 'junit:junit' in project ':app'. Resolved versions for app (4.8.2) and test app (4.12) differ.
+
+This can be suppressed by forcing all of the junit modules to use the same version.
+```java
+configurations.all {
+ resolutionStrategy.force 'junit:junit:4.12'
+}
+```
+## Setting up the neural network on a background thread
+
+Training even a simple neural network like the one in this example requires a significant amount of processor power, which is in limited supply on mobile devices. Thus, it is imperative that a background thread be used for building and training the neural network, which then returns the output to the main thread for updating the UI. In this example we will be using an AsyncTask which accepts the input measurements from the UI and passes them as type double to the doInBackground() method. First, let's get references to the editTexts in the UI layout that accept the iris measurements inside of our onCreate method. Then an onClickListener will execute our AsyncTask, pass it the measurements entered by the user, and show a progress bar until we hide it again in onPostExecute().
+```java
+public class MainActivity extends AppCompatActivity {
+
+
+@Override
+ public void onCreate(Bundle savedInstanceState) {
+ super.onCreate(savedInstanceState);
+ setContentView(R.layout.activity_main);
+
+ //get references to the editTexts that take the measurements
+ final EditText PL = (EditText) findViewById(R.id.editText);
+ final EditText PW = (EditText) findViewById(R.id.editText2);
+ final EditText SL = (EditText) findViewById(R.id.editText3);
+ final EditText SW = (EditText) findViewById(R.id.editText4);
+
+ //onclick to capture the input and launch the asyncTask
+ Button button = (Button) findViewById(R.id.button);
+
+ button.setOnClickListener(new View.OnClickListener() {
+ @Override
+ public void onClick(View v) {
+
+ final double pl = Double.parseDouble(PL.getText().toString());
+ final double pw = Double.parseDouble(PW.getText().toString());
+ final double sl = Double.parseDouble(SL.getText().toString());
+ final double sw = Double.parseDouble(SW.getText().toString());
+
+ AsyncTaskRunner runner = new AsyncTaskRunner();
+
+ //pass the measurement as params to the AsyncTask
+ runner.execute(pl,pw,sl,sw);
+
+ ProgressBar bar = (ProgressBar) findViewById(R.id.progressBar);
+ bar.setVisibility(View.VISIBLE);
+ }
+ });
+ }
+```
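One caveat the demo does not handle: Double.parseDouble throws NumberFormatException when a field is empty or non-numeric, which would crash the click handler. A defensive parsing helper is one option (a sketch; the helper name and fallback behavior are our own, not part of the demo):

```java
class ParseSketch {
    // Returns the parsed value, or the supplied fallback when input is blank or invalid
    static double parseOrDefault(String text, double fallback) {
        if (text == null || text.trim().isEmpty()) return fallback;
        try {
            return Double.parseDouble(text.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("5.1", 0.0)); // 5.1
        System.out.println(parseOrDefault("", 0.0));    // 0.0
        System.out.println(parseOrDefault("abc", 0.0)); // 0.0
    }
}
```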
+Now let’s write our AsyncTask<*Params*, *Progress*, *Results*>. The AsyncTask needs to have a *Params* of type Double to receive the decimal value measurements from the UI. The *Result* type is set to INDArray, which is returned from the doInBackground() Method and passed to the onPostExecute() method for updating the UI. NDArrays are provided by the ND4J library and are essentially n-dimensional arrays with a given number of dimensions. For more on NDArrays, see https://nd4j.org/userguide.
+```java
+private class AsyncTaskRunner extends AsyncTask<Double, Integer, INDArray> {
+
+ // Runs in UI before background thread is called
+ @Override
+ protected void onPreExecute() {
+ super.onPreExecute();
+
+ ProgressBar bar = (ProgressBar) findViewById(R.id.progressBar);
+        bar.setVisibility(View.VISIBLE);
+ }
+```
+## Preparing the training data set and user input
+
+The doInBackground() method will handle the formatting of the training data, the construction of the neural net, the training of the net, and the analysis of the input data by the trained model. The user input has only 4 values, thus we can add those directly to a 1x4 INDArray using the putScalar() method. The training data is much larger and must be converted from CSV lists to matrices through an iterative *for* loop.
+
+The training data is stored in the app as two arrays, one for the Iris measurements named *irisData* which contains a list of 150 iris measurements and another for the labels of iris type named *labelData*. These will be transformed to 150x4 and 150x3 matrices, respectively, so that they can be converted into INDArray objects that the neural network will use for training.
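The flatten-into-matrix pattern described above can be sketched standalone with a toy 2x3 one-hot label example (the real arrays hold 150 rows; these numbers are made up):

```java
class FlattenSketch {
    // Reshape a flat row-major list into a rows x cols matrix,
    // the same iterative pattern the demo uses for its 150x4 and 150x3 data
    static double[][] reshape(double[] flat, int rows, int cols) {
        double[][] matrix = new double[rows][cols];
        int i = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                matrix[r][c] = flat[i++];
            }
        }
        return matrix;
    }

    public static void main(String[] args) {
        // Two hypothetical one-hot labels: class 0, then class 1
        double[] labelData = {1, 0, 0,   0, 1, 0};
        double[][] labelMatrix = reshape(labelData, 2, 3);
        System.out.println(java.util.Arrays.deepToString(labelMatrix));
        // [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    }
}
```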
+```java
+    // This is our main background thread for the neural net
+    @Override
+    protected INDArray doInBackground(Double... params) {
+ //Get the doubles from params, which is an array so they will be 0,1,2,3
+ double pld = params[0];
+ double pwd = params[1];
+ double sld = params[2];
+ double swd = params[3];
+
+ //Create input INDArray for the user measurements
+ INDArray actualInput = Nd4j.zeros(1,4);
+ actualInput.putScalar(new int[]{0,0}, pld);
+ actualInput.putScalar(new int[]{0,1}, pwd);
+ actualInput.putScalar(new int[]{0,2}, sld);
+ actualInput.putScalar(new int[]{0,3}, swd);
+
+ //Convert the iris data into 150x4 matrix
+ int row=150;
+ int col=4;
+ double[][] irisMatrix=new double[row][col];
+ int i = 0;
+        for(int r=0; r<row; r++){
+            for(int c=0; c<col; c++){
+                irisMatrix[r][c]=irisData[i++];
+            }
+        }
+
+        //Convert the iris label data into a 150x3 matrix
+        int labelRow=150;
+        int labelCol=3;
+        double[][] labelMatrix=new double[labelRow][labelCol];
+        i = 0;
+        for(int r=0; r<labelRow; r++){
+            for(int c=0; c<labelCol; c++){
+                labelMatrix[r][c]=labelData[i++];
+            }
+        }
+
+        //Convert both matrices into the INDArrays the network trains on
+        INDArray trainingIn = Nd4j.create(irisMatrix);
+        INDArray trainingOut = Nd4j.create(labelMatrix);
+```
+## Building and Training the Neural Network
+
+Now that our data is ready, we can build a simple multi-layer perceptron with a single hidden layer. The *DenseLayer* class is used to create the input layer and the hidden layer of the network, while the *OutputLayer* class is used for the output layer. The number of columns in the input INDArray must equal the number of neurons in the input layer (nIn). The number of inputs to the hidden layer (nIn) must equal the input layer's nOut value. Finally, the output layer's nIn should match the hidden layer's nOut, and the output layer's nOut must equal the number of possible classifications, which is 3.
+```java
+//define the layers of the network
+ DenseLayer inputLayer = new DenseLayer.Builder()
+ .nIn(4)
+ .nOut(3)
+ .name("Input")
+ .build();
+
+ DenseLayer hiddenLayer = new DenseLayer.Builder()
+ .nIn(3)
+ .nOut(3)
+ .name("Hidden")
+ .build();
+
+ OutputLayer outputLayer = new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
+ .nIn(3)
+ .nOut(3)
+ .name("Output")
+ .activation(Activation.SOFTMAX)
+ .build();
+```
+The next step is to build the neural network using *nncBuilder*. The parameters selected below for training are standard. To learn more about optimizing network training, see deeplearning4j.org.
+```java
+ NeuralNetConfiguration.Builder nncBuilder = new NeuralNetConfiguration.Builder();
+ long seed = 6;
+ nncBuilder.seed(seed);
+ nncBuilder.iterations(1000);
+ nncBuilder.learningRate(0.1);
+ nncBuilder.activation(Activation.TANH);
+ nncBuilder.weightInit(WeightInit.XAVIER);
+ nncBuilder.regularization(true).l2(1e-4);
+
+ NeuralNetConfiguration.ListBuilder listBuilder = nncBuilder.list();
+ listBuilder.layer(0, inputLayer);
+ listBuilder.layer(1, hiddenLayer);
+ listBuilder.layer(2, outputLayer);
+
+ listBuilder.backprop(true);
+
+ MultiLayerNetwork myNetwork = new MultiLayerNetwork(listBuilder.build());
+ myNetwork.init();
+
+ //Create a data set from the INDArrays and train the network
+ DataSet myData = new DataSet(trainingIn, trainingOut);
+ myNetwork.fit(myData);
+
+ //Evaluate the input data against the model
+ INDArray actualOutput = myNetwork.output(actualInput);
+ Log.d("myNetwork Output ", actualOutput.toString());
+
+ //Here we return the INDArray to onPostExecute where it can be
+ //used to update the UI
+ return actualOutput;
+ }
+ }
+```
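Because the output layer uses SOFTMAX, the three values returned for any input form a probability distribution that sums to 1. The function itself in plain Java (a sketch of the math, not DL4J's implementation):

```java
class SoftmaxSketch {
    // Numerically stable softmax: subtract the max logit before exponentiating
    static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : logits) max = Math.max(max, v);
        double sum = 0.0;
        double[] out = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical raw scores for the three iris classes
        double[] p = softmax(new double[]{2.0, 1.0, 0.1});
        // The largest score gets the largest probability, and the three sum to 1
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```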
+## Updating the UI
+
+Once the training of the neural network and the classification of the user measurements are complete, the doInBackground() method will finish and onPostExecute() will have access to the main thread and UI, allowing us to update the UI with the classification results. Note that the decimal places reported on the probabilities can be controlled by setting a DecimalFormat pattern.
+```java
+//This is where we update the UI with our classification results
+ @Override
+ protected void onPostExecute(INDArray result) {
+ super.onPostExecute(result);
+
+ //Hide the progress bar now that we are finished
+ ProgressBar bar = (ProgressBar) findViewById(R.id.progressBar);
+ bar.setVisibility(View.INVISIBLE);
+
+ //Retrieve the three probabilities
+ Double first = result.getDouble(0,0);
+ Double second = result.getDouble(0,1);
+ Double third = result.getDouble(0,2);
+
+ //Update the UI with output
+ TextView setosa = (TextView) findViewById(R.id.textView11);
+ TextView versicolor = (TextView) findViewById(R.id.textView12);
+ TextView virginica = (TextView) findViewById(R.id.textView13);
+
+ //Limit the double to values to two decimals using DecimalFormat
+ DecimalFormat df2 = new DecimalFormat(".##");
+
+ //Set the text of the textViews in UI to show the probabilites
+ setosa.setText(String.valueOf(df2.format(first)));
+ versicolor.setText(String.valueOf(df2.format(second)));
+ virginica.setText(String.valueOf(df2.format(third)));
+
+ }
+```
+
+## Conclusion
+
+Hopefully this tutorial has illustrated how the compatibility of DL4J with Android makes it easy to build, train, and evaluate neural networks on mobile devices. We used a simple UI to take the input measurement values and then passed them as the *Params* in an AsyncTask. The processor-intensive steps of data preparation, network layer building, model training, and evaluation of the user data were all performed in the doInBackground() method of the background thread, maintaining a stable and responsive device. Once completed, we passed the output INDArray as the AsyncTask *Results* to onPostExecute(), where the UI was updated to display the classification results.
+The limitations of processing power and battery life of mobile devices make training robust, multi-layer networks somewhat unfeasible. To address this limitation, we will next look at an example Android application that saves the trained model on the device for faster performance after an initial model training.
+
+The complete code for this example is available at https://github.com/jrmerwin/DL4JIrisClassifierDemo
+
+
+
diff --git a/android-Prereqs and Configuration.md b/android-Prereqs and Configuration.md
new file mode 100644
index 000000000..3ca1b7a61
--- /dev/null
+++ b/android-Prereqs and Configuration.md
@@ -0,0 +1,340 @@
+---
+title: Using Deeplearning4J in Android Applications
+layout: default
+---
+# Prerequisites and Configurations for DL4J in Android
+Contents
+* [Prerequisites](#head_link1)
+* [Required Dependencies](#head_link2)
+* [Managing Dependencies with ProGuard](#head_link3)
+* [Memory Management](#head_link4)
+* [Saving and Loading Networks on Android](#head_link5)
+
+While neural networks are typically run on powerful computers using multiple GPUs, the compatibility of Deeplearning4J with the Android platform makes using DL4J neural networks in Android applications a possibility. This tutorial will cover the basics of setting up Android Studio for building DL4J applications. Several configurations for dependencies, memory management, and compilation exclusions needed to mitigate the limitations of low-powered mobile devices are outlined below. If you just want to get a DL4J app running on your device, you can jump ahead to a simple demo application which trains a neural network for Iris flower classification, available [here](https://github.com/jrmerwin/DL4JIrisClassifierDemo).
+## Prerequisites
+* Android Studio 2.2 or newer, which can be downloaded [here](https://developer.android.com/studio/index.html#Other).
+* Android Studio version 2.2 and higher comes with the latest OpenJDK embedded; however, it is recommended that you install the JDK yourself so that you can update it independently of Android Studio. Android Studio 3.0 and later supports all of Java 7 and a subset of Java 8 language features. Java JDKs can be downloaded from Oracle's website.
+* Within Android studio, the Android SDK Manager can be used to install Android Build tools 24.0.1 or later, SDK platform 24 or later, and the Android Support Repository.
+* An Android device or an emulator running API level 21 or higher. A minimum of 200 MB of internal storage space free is recommended.
+It is also recommended that you download and install IntelliJ IDEA, Maven, and the complete dl4j-examples directory for building and training neural networks on your desktop rather than in Android Studio. A quickstart guide for setting up DL4J projects can be found [here](https://deeplearning4j.org/quickstart).
+## Required Dependencies
+In order to use Deeplearning4J in your Android projects, you will need to add the following dependencies to your app module’s build.gradle file. Depending on the type of neural network used in your application, you may need to add additional dependencies.
+``` java
+compile 'org.deeplearning4j:deeplearning4j-nn:0.9.1'
+compile 'org.nd4j:nd4j-native:0.9.1'
+compile 'org.nd4j:nd4j-native:0.9.1:android-x86'
+compile 'org.nd4j:nd4j-native:0.9.1:android-arm'
+compile 'org.bytedeco:javacpp:1.4'
+compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-x86'
+compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-arm'
+testCompile 'junit:junit:4.12'
+```
+DL4J depends on ND4J, which is a library that offers fast n-dimensional arrays. ND4J in turn depends on a platform-specific native code library called JavaCPP, therefore you must load a version of ND4J that matches the architecture of the Android device. Both -x86 and -arm types can be included to support multiple device processor types.
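Which native classifier a device needs depends on its CPU architecture. As a rough illustration in plain desktop Java (on Android you would instead consult `android.os.Build.SUPPORTED_ABIS`), the class and the mapping below are a hypothetical sketch, not part of DL4J:

```java
public class ArchCheck {
    // Rough sketch: map a JVM-reported architecture string to the
    // ND4J dependency classifiers used in the build.gradle above.
    static String classifierFor(String arch) {
        // arm / aarch64 devices need the -arm natives, others the -x86 natives
        return (arch.contains("arm") || arch.contains("aarch"))
                ? "android-arm" : "android-x86";
    }

    public static void main(String[] args) {
        String arch = System.getProperty("os.arch");
        System.out.println(arch + " -> " + classifierFor(arch));
    }
}
```

Including both classifiers, as shown in the dependency list, sidesteps this check at the cost of a larger APK.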
+
+The above dependencies contain several files with identical names, which must be excluded by adding the following exclude parameters to the packagingOptions block of your build.gradle file.
+```java
+packagingOptions {
+    exclude 'META-INF/DEPENDENCIES'
+    exclude 'META-INF/DEPENDENCIES.txt'
+    exclude 'META-INF/LICENSE'
+    exclude 'META-INF/LICENSE.txt'
+    exclude 'META-INF/license.txt'
+    exclude 'META-INF/NOTICE'
+    exclude 'META-INF/NOTICE.txt'
+    exclude 'META-INF/notice.txt'
+    exclude 'META-INF/INDEX.LIST'
+}
+```
+After adding the above dependencies and exclusions to the build.gradle file, try syncing Gradle to see if any other exclusions are needed. The error message will identify the file path that should be added to the list of exclusions. An example error message with file path is: *> More than one file was found with OS independent path 'org/bytedeco/javacpp/windows-x86_64/msvp120.dll'*
+Compiling these dependencies involves a large number of files, enough to exceed the 64K method limit, so it is necessary to set multiDexEnabled to true in defaultConfig.
+```java
+multiDexEnabled true
+```
+A conflict in the junit module versions often causes the following error: *> Conflict with dependency 'junit:junit' in project ':app'. Resolved versions for app (4.8.2) and test app (4.12) differ*. This can be suppressed by forcing all of the junit modules to use the same version with the following:
+``` java
+configurations.all {
+ resolutionStrategy.force 'junit:junit:4.12'
+}
+```
+## Managing Dependencies with ProGuard
+The DL4J dependencies compile a large number of files. ProGuard can be used to minimize your APK file size. ProGuard detects and removes unused classes, fields, methods, and attributes from your packaged app, including those from code libraries. You can learn more about using Proguard [here](https://developer.android.com/studio/build/shrink-code.html).
+To enable code shrinking with ProGuard, add minifyEnabled true to the appropriate build type in your build.gradle file.
+```java
+buildTypes {
+ release {
+ minifyEnabled true
+ proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
+ }
+}
+```
+It is recommended to upgrade the ProGuard in your Android SDK to the latest release (5.1 or higher). Note that upgrading the build tools or other aspects of your SDK might cause ProGuard to reset to the version shipped with the SDK. In order to force ProGuard to use a version other than the Android Gradle default, you can include this in the buildscript block of your `build.gradle` file:
+``` java
+buildscript {
+ configurations.all {
+ resolutionStrategy {
+ force 'net.sf.proguard:proguard-gradle:5.3.2'
+ }
+ }
+}
+```
+ProGuard optimizes and reduces the amount of code in your Android application in order to make it smaller and faster. Unfortunately, ProGuard removes annotations by default, including the @Platform annotation used by JavaCV. To make ProGuard preserve these annotations and keep native methods, add the following flags to the proguard-rules.pro file.
+``` java
+# enable optimization
+-optimizations !code/simplification/arithmetic,!code/simplification/cast,!field/*,!class/merging/*
+-optimizationpasses 5
+-allowaccessmodification
+-dontwarn org.apache.lang.**
+-ignorewarnings
+
+-keepattributes *Annotation*
+# JavaCV
+-keep @org.bytedeco.javacpp.annotation interface * {*;}
+-keep @org.bytedeco.javacpp.annotation.Platform public class *
+-keepclasseswithmembernames class * {@org.bytedeco.* <methods>;}
+
+-keepattributes EnclosingMethod
+-keep @interface org.bytedeco.javacpp.annotation.*,javax.inject.*
+
+-keepattributes *Annotation*, Exceptions, Signature, Deprecated, SourceFile, SourceDir, LineNumberTable, LocalVariableTable, LocalVariableTypeTable, Synthetic, EnclosingMethod, RuntimeVisibleAnnotations, RuntimeInvisibleAnnotations, RuntimeVisibleParameterAnnotations, RuntimeInvisibleParameterAnnotations, AnnotationDefault, InnerClasses
+-keep class org.bytedeco.javacpp.** {*;}
+-dontwarn java.awt.**
+-dontwarn org.bytedeco.javacv.**
+-dontwarn org.bytedeco.javacpp.**
+# end javacv
+
+# This flag is needed to keep native methods
+-keepclasseswithmembernames class * {
+    native <methods>;
+}
+
+-keep public class * extends android.view.View {
+    public <init>(android.content.Context);
+    public <init>(android.content.Context, android.util.AttributeSet);
+    public <init>(android.content.Context, android.util.AttributeSet, int);
+ public void set*(...);
+}
+
+-keepclasseswithmembers class * {
+    public <init>(android.content.Context, android.util.AttributeSet);
+}
+
+-keepclasseswithmembers class * {
+    public <init>(android.content.Context, android.util.AttributeSet, int);
+}
+
+-keepclassmembers class * extends android.app.Activity {
+ public void *(android.view.View);
+}
+
+# For enumeration classes
+-keepclassmembers enum * {
+ public static **[] values();
+ public static ** valueOf(java.lang.String);
+}
+
+-keep class * implements android.os.Parcelable {
+ public static final android.os.Parcelable$Creator *;
+}
+
+-keepclassmembers class **.R$* {
+    public static <fields>;
+}
+
+-keep class android.support.v7.app.** { *; }
+-keep interface android.support.v7.app.** { *; }
+-keep class com.actionbarsherlock.** { *; }
+-keep interface com.actionbarsherlock.** { *; }
+-dontwarn android.support.**
+-dontwarn com.google.ads.**
+
+# Flags to keep standard classes
+-keep public class * extends android.app.Activity
+-keep public class * extends android.app.Application
+-keep public class * extends android.app.Service
+-keep public class * extends android.content.BroadcastReceiver
+-keep public class * extends android.content.ContentProvider
+-keep public class * extends android.app.backup.BackupAgent
+-keep public class * extends android.preference.Preference
+-keep public class * extends android.support.v7.app.Fragment
+-keep public class * extends android.support.v7.app.DialogFragment
+-keep public class * extends com.actionbarsherlock.app.SherlockListFragment
+-keep public class * extends com.actionbarsherlock.app.SherlockFragment
+-keep public class * extends com.actionbarsherlock.app.SherlockFragmentActivity
+-keep public class * extends android.app.Fragment
+-keep public class com.android.vending.licensing.ILicensingService
+```
+Testing your app is the best way to check if any errors are being caused by inappropriately removed code; however, you can also inspect what was removed by reviewing the usage.txt output file saved in /build/outputs/mapping/release/.
+
+To fix errors and force ProGuard to retain certain code, add a -keep line in the ProGuard configuration file. For example:
+``` java
+-keep public class MyClass
+```
+## Memory Management
+It may also be advantageous to increase the memory allocated to your app by adding android:largeHeap="true" to the manifest file. Allocating a larger heap decreases the risk of an OutOfMemoryError during memory-intensive operations.
+``` xml
+android:largeHeap="true"
+```
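Before kicking off a memory-intensive step such as model training, it can also be useful to check how much heap headroom remains at runtime. A minimal sketch in plain Java (the class and method names here are hypothetical, not part of DL4J or Android):

```java
public class HeapReport {
    // Approximate remaining heap headroom in megabytes:
    // the maximum heap size minus the portion of the current heap in use.
    static long headroomMb() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (rt.maxMemory() - used) / (1024L * 1024L);
    }

    public static void main(String[] args) {
        System.out.println("Approx. heap headroom: " + headroomMb() + " MB");
    }
}
```

With android:largeHeap="true" set, `Runtime.maxMemory()` reports the enlarged limit the system grants the app.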
+As of release 0.9.0, ND4J offers an additional memory-management model: workspaces. Workspaces allow you to reuse memory for cyclic workloads without relying on the JVM garbage collector to track off-heap memory. A DL4J workspace allows memory to be preallocated before a try-with-resources block and reused over and over within that block.
+
+If your training process uses workspaces, it is recommended that you disable or reduce the frequency of periodic GC calls prior to your model.fit() call.
+``` java
+// this will limit frequency of gc calls to 5000 milliseconds
+Nd4j.getMemoryManager().setAutoGcWindow(5000);
+
+// this will totally disable it
+Nd4j.getMemoryManager().togglePeriodicGc(false);
+```
+The example below illustrates the use of a workspace for memory allocation in the AsyncTask of an Android application. More information concerning ND4J workspaces can be found [here](https://deeplearning4j.org/workspaces).
+```java
+import org.nd4j.linalg.api.memory.MemoryWorkspace;
+import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
+import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
+import org.nd4j.linalg.api.memory.enums.LearningPolicy;
+
+
+private class AsyncTaskRunner extends AsyncTask<String, Integer, INDArray> {
+
+ // Runs in UI before background thread is called
+ @Override
+ protected void onPreExecute() {
+ super.onPreExecute();
+ }
+
+ //Runs on background thread, this is where we will initiate the Workspace
+ protected INDArray doInBackground(String... params) {
+
+ // we will create configuration with 10MB memory space preallocated
+ WorkspaceConfiguration initialConfig = WorkspaceConfiguration.builder()
+ .initialSize(10 * 1024L * 1024L)
+ .policyAllocation(AllocationPolicy.STRICT)
+ .policyLearning(LearningPolicy.NONE)
+ .build();
+
+ INDArray result = null;
+
+ try(MemoryWorkspace ws = Nd4j.getWorkspaceManager().getAndActivateWorkspace(initialConfig, "SOME_ID")) {
+ // now, INDArrays created within this try block will be allocated from this workspace pool
+
+ //Load a trained model
+ File file = new File(Environment.getExternalStorageDirectory() + "/trained_model.zip");
+ MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(file);
+
+ // Create input in INDArray
+ INDArray inputData = Nd4j.zeros(1, 4);
+
+ inputData.putScalar(new int[]{0, 0}, 1);
+ inputData.putScalar(new int[]{0, 1}, 0);
+ inputData.putScalar(new int[]{0, 2}, 1);
+ inputData.putScalar(new int[]{0, 3}, 0);
+
+ result = restored.output(inputData);
+
+ }
+ catch(IOException ex){Log.d("AsyncTaskRunner2 ", "catchIOException = " + ex );}
+
+ return result;
+ }
+
+ protected void onProgressUpdate(Integer... values) {
+ super.onProgressUpdate(values);
+ }
+
+ protected void onPostExecute(INDArray result) {
+ super.onPostExecute(result);
+ //Handle results and update UI here.
+ }
+
+ }
+```
+
+## Saving and Loading Networks on Android
+Practical considerations regarding performance limits are needed when building Android applications that run neural networks. Training a neural network on a device is possible, but should only be attempted with networks with limited numbers of layers, nodes, and iterations. The first Demo app [DL4JIrisClassifierDemo](https://github.com/jrmerwin/DL4JIrisClassifierDemo) is able to train on a standard device in about 15 seconds.
+
+When training on a device is a reasonable option, the application performance can be improved by saving the trained model on the phone's external storage once an initial training is complete. The trained model can then be used as an application resource. This approach is useful for training networks with data obtained from user input. The following code illustrates how to train a network and save it on the phone's external resources.
+
+For API 23 and greater, you will need to include the permissions in your manifest and also programmatically request the read and write permissions in your activity. The required Manifest permissions are:
+``` xml
+<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
+<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
+ ...
+```
+You need to implement ActivityCompat.OnRequestPermissionsResultCallback in the activity and then check for permission status.
+``` java
+public class MainActivity extends AppCompatActivity
+ implements ActivityCompat.OnRequestPermissionsResultCallback {
+
+ private static final int REQUEST_EXTERNAL_STORAGE = 1;
+ private static String[] PERMISSIONS_STORAGE = {
+ Manifest.permission.READ_EXTERNAL_STORAGE,
+ Manifest.permission.WRITE_EXTERNAL_STORAGE
+ };
+
+ @Override
+ protected void onCreate(Bundle savedInstanceState) {
+ super.onCreate(savedInstanceState);
+ setContentView(R.layout.activity_main);
+
+ verifyStoragePermission(MainActivity.this);
+ //…
+ }
+
+ public static void verifyStoragePermission(Activity activity) {
+ // Get permission status
+ int permission = ActivityCompat.checkSelfPermission(activity, Manifest.permission.WRITE_EXTERNAL_STORAGE);
+ if (permission != PackageManager.PERMISSION_GRANTED) {
+ // We don't have permission we request it
+ ActivityCompat.requestPermissions(
+ activity,
+ PERMISSIONS_STORAGE,
+ REQUEST_EXTERNAL_STORAGE
+ );
+ }
+    }
+}
+```
+To save a network after training on the device, use an OutputStream within a try / catch block.
+``` java
+try {
+ File file = new File(Environment.getExternalStorageDirectory() + "/trained_model.zip");
+ OutputStream outputStream = new FileOutputStream(file);
+ boolean saveUpdater = true;
+ ModelSerializer.writeModel(myNetwork, outputStream, saveUpdater);
+
+} catch (Exception e) {
+ Log.e("saveToExternalStorage error", e.getMessage());
+}
+```
+To load the trained network from storage you can use the restoreMultiLayerNetwork method.
+``` java
+try{
+ //Load the model
+ File file = new File(Environment.getExternalStorageDirectory() + "/trained_model.zip");
+ MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(file);
+
+} catch (Exception e) {
+ Log.e("Load from External Storage error", e.getMessage());
+}
+```
+For larger or more complex neural networks, such as convolutional or recurrent neural networks, training on the device is not a realistic option, as long processing times during network training run the risk of generating an OutOfMemoryError and make for a poor user experience. As an alternative, the neural network can be trained on the desktop, saved via ModelSerializer, and then loaded as a pre-trained model in the application. Using a pre-trained model in your Android application can be achieved with the following steps:
+* Train yourModel on the desktop and save it via ModelSerializer.
+* Create a raw resource folder in the res directory of the application.
+* Copy the yourModel.zip file into the raw folder.
+* Access it from your resources using an InputStream within a try / catch block.
+``` java
+try {
+// Load name of model file (yourModel.zip).
+ InputStream is = getResources().openRawResource(R.raw.yourModel);
+
+// Load yourModel.zip.
+ MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(is);
+
+// Use yourModel.
+    INDArray results = restored.output(input);
+    System.out.println("Results: " + results);
+// Handle the exception error
+} catch(IOException e) {
+ e.printStackTrace();
+ }
+```
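The `results` INDArray returned by `restored.output(input)` holds one probability per output class. Picking the predicted class is just an argmax over that vector; a minimal sketch in plain Java (the helper and the sample values are hypothetical):

```java
public class TopClass {
    // Index of the largest value in a probability vector,
    // e.g. the vector obtained from results.toDoubleVector().
    static int argMax(double[] probs) {
        int best = 0;
        for (int i = 1; i < probs.length; i++) {
            if (probs[i] > probs[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] probs = {0.05, 0.10, 0.70, 0.15}; // hypothetical network output
        System.out.println("Predicted class: " + argMax(probs)); // prints 2
    }
}
```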