Commit 7257b6a

docs: Preparing release notes for v1.5.3 [WIP]
Signed-off-by: mandar2812 <mandar2812@gmail.com>
1 parent c11edff commit 7257b6a

File tree: 3 files changed (+130 / -1 lines)


Diff for: docs/images/conv-fact.png (27.4 KB)

Diff for: docs/images/conv-fact2.png (57.6 KB)

Diff for: docs/releases/mydoc_release_notes_153.md (+130 / -1)
@@ -6,9 +6,116 @@
### Tensorflow Integration

**Package** `dynaml.tensorflow`

**Training Stopping Criteria**

Create common and simple training stop criteria, such as:

- Stop after a fixed number of iterations: `dtflearn.max_iter_stop(100000)`
- Stop when the absolute change in the loss falls below a threshold: `dtflearn.abs_loss_change_stop(0.0001)`
- Stop when the relative change in the loss falls below a threshold: `dtflearn.rel_loss_change_stop(0.001)`
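A criterion built this way is passed to `dtflearn.build_tf_model()` as its stopping-criterion argument, as shown in the usage example below. A minimal sketch, using only the calls listed above:

```scala
import io.github.mandar2812.dynaml.tensorflow._

// Build the criteria once and reuse them across training runs.
val stopAfterIters   = dtflearn.max_iter_stop(100000)
val stopOnLossChange = dtflearn.abs_loss_change_stop(0.0001)
```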
**Neural Network Building Blocks**

- Added helper method `dtflearn.build_tf_model()` for training tensorflow models/estimators.

**Usage**

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.data.image.MNISTLoader
import ammonite.ops._

val tempdir = home/"tmp"

// Load the MNIST data set and build a shuffled, batched training stream.
val dataSet = MNISTLoader.load(java.nio.file.Paths.get(tempdir.toString()))
val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)
val trainData =
  trainImages.zip(trainLabels)
    .repeat()
    .shuffle(10000)
    .batch(256)
    .prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(UINT8, Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))

val trainInput = tf.learn.Input(UINT8, Shape(-1))

val architecture = tf.learn.Flatten("Input/Flatten") >>
  tf.learn.Cast("Input/Cast", FLOAT32) >>
  tf.learn.Linear("Layer_0/Linear", 128) >>
  tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_1/Linear", 64) >>
  tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_2/Linear", 32) >>
  tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
  tf.learn.Linear("OutputLayer/Linear", 10)

val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

val loss =
  tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
  tf.learn.Mean("Loss/Mean") >>
  tf.learn.ScalarSummary("Loss/Summary", "Loss")

val optimizer = tf.train.AdaGrad(0.1)

// Directory in which to save summaries and checkpoints.
val summariesDir = java.nio.file.Paths.get((tempdir/"mnist_summaries").toString())

val (model, estimator) = dtflearn.build_tf_model(
  architecture, input, trainInput, trainingInputLayer,
  loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
  100, 100, 100)(trainData)
```
- Build feedforward layers and feedforward layer stacks more easily.

**Usage**

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Create a single feedforward layer.
val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

// Create a stack of feedforward layers.
val net_layer_sizes = Seq(10, 5, 3)

val stack = dtflearn.feedforward_stack(
  (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
  net_layer_sizes)
```
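Continuing the snippet above, the resulting stack should compose with other layers via `>>` like the rest of the building blocks; a minimal sketch, where the appended output layer is purely illustrative:

```scala
// Continuing from the previous snippet: append an illustrative linear
// output layer, assuming feedforward stacks compose with >> like any
// other layer.
val net = stack >> tf.learn.Linear("OutputLayer/Linear", 1)
```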
#### Batch Normalisation

[Batch normalisation](https://arxiv.org/abs/1502.03167) is used to standardize activations of convolutional layers and to speed up training of deep neural networks.

**Usage**

```scala
import io.github.mandar2812.dynaml.tensorflow._

val bn = dtflearn.batch_norm("BatchNorm1")
```
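A sketch of how such a layer might be slotted into an architecture, assuming `dtflearn.batch_norm` returns a layer composable with `>>` like the other building blocks:

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Hypothetical composition: normalise the pre-activations of a linear
// layer before the ReLU, assuming batch_norm composes with >>.
val block =
  tf.learn.Linear("Layer_0/Linear", 64) >>
  dtflearn.batch_norm("BatchNorm1") >>
  tf.learn.ReLU("Layer_0/ReLU", 0.1f)
```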
#### Inception v2

The [_Inception_](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf) architecture, proposed by Google, is an important
@@ -18,6 +125,8 @@
DynaML now offers the Inception cell as a computational layer.

**Usage**
```scala
import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
@@ -36,6 +145,26 @@
```

In a subsequent [paper](https://arxiv.org/pdf/1512.00567.pdf), the authors introduced optimizations to the Inception architecture, known colloquially as _Inception v2_.

In _Inception v2_, larger convolutions (i.e. `3 x 3` and `5 x 5`) are implemented in a factorized manner to reduce the number of parameters to be learned. For example, the `3 x 3` convolution is expressed as a combination of `1 x 3` and `3 x 1` convolutions.

![inception](https://github.com/transcendent-ai-labs/DynaML/blob/master/docs/images/conv-fact.png)

Similarly, the `5 x 5` convolution can be expressed as a combination of two `3 x 3` convolutions.

![inception](https://github.com/transcendent-ai-labs/DynaML/blob/master/docs/images/conv-fact2.png)
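The parameter savings behind this factorization follow from a simple count of kernel weights, sketched below per input-output channel pair and ignoring biases:

```scala
// Weights per (input, output) channel pair, ignoring biases.
val full3x3     = 3 * 3          // 9 weights
val factored3x3 = 1 * 3 + 3 * 1  // 6 weights: one third fewer
val full5x5     = 5 * 5          // 25 weights
val stacked3x3  = 2 * (3 * 3)    // 18 weights: ~28% fewer
```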
### 3D Graphics

**Package** [`dynaml.graphics`](https://github.com/transcendent-ai-labs/DynaML/blob/master/dynaml-core/src/main/scala-2.11/io/github/mandar2812/dynaml/graphics/plot3d/package.scala)

Create 3d plots of surfaces; for a use case, see the `jzydemo.sc` and `tf_wave_pde.sc` scripts.
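A hypothetical sketch of what such a plot might look like in code (the `plot3d.draw` call and its signature are assumptions, not confirmed; consult the `jzydemo.sc` script for the actual API):

```scala
import io.github.mandar2812.dynaml.graphics._

// Hypothetical: render the surface z = sin(x) * cos(y).
// plot3d.draw and its (Double, Double) => Double argument are assumed
// here for illustration; see jzydemo.sc for real usage.
val surface = plot3d.draw((x: Double, y: Double) => math.sin(x) * math.cos(y))
```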

### Library Organisation
