polishing (#2)
* license added in setup.py

* index fixed

* index fixed

* blind guesses

* remove .egg-info from git

* .egg-info added to .gitignore

* _proc removed from git and added to .gitignore

* polishing
davorrunje authored Sep 3, 2022
1 parent 6687f36 commit 8a5f3eb
Showing 7 changed files with 115 additions and 131 deletions.
47 changes: 22 additions & 25 deletions README.md
@@ -1,4 +1,4 @@
-mono-dense-keras
+Constrained Monotonic Neural Networks
 ================
 
 <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
@@ -71,7 +71,7 @@ def generate_data(no_samples: int, noise: float):
     y = x[:, 0] ** 3
     y += np.sin(x[:, 1] / (2*np.pi))
     y += np.exp(-x[:, 2])
-    y += 0.1 * rng.normal(size=no_samples)
+    y += noise * rng.normal(size=no_samples)
     return x, y
 
 x_train, y_train = generate_data(10_000, noise=0.1)
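The hunk above only shows the function body below the fold; the collapsed lines hide how `x` and `rng` are set up. A self-contained sketch of the changed `generate_data` (the uniform sampling range and the seed are assumptions, not taken from this commit):

```python
import numpy as np

rng = np.random.default_rng(42)  # assumed seed; the hidden setup lines may differ

def generate_data(no_samples: int, noise: float):
    # Assumed input sampling; the collapsed diff lines hide the original.
    x = rng.uniform(-1, 1, size=(no_samples, 3))
    y = x[:, 0] ** 3
    y += np.sin(x[:, 1] / (2 * np.pi))
    y += np.exp(-x[:, 2])
    # The commit's change: the noise scale is a parameter instead of hard-coded 0.1
    y += noise * rng.normal(size=no_samples)
    return x, y

x_train, y_train = generate_data(10_000, noise=0.1)
```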
@@ -98,9 +98,6 @@ model.add(Dense(128, activation="elu"))
 model.add(Dense(1))
 ```
 
-/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
-  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
-
 We’ll train the network using the `Adam` optimizer and the
 `ExponentialDecay` learning rate schedule:
 
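The `ExponentialDecay` schedule mentioned in the README text shrinks the learning rate by a fixed factor every `decay_steps` optimizer steps. A minimal NumPy-free sketch of the formula it computes (the `decay_steps` and `decay_rate` defaults here are assumptions for illustration, not values from this commit):

```python
def exponential_decay(initial_learning_rate: float, step: int,
                      decay_steps: int = 10_000, decay_rate: float = 0.9,
                      staircase: bool = False) -> float:
    # Mirrors the formula documented for
    # tf.keras.optimizers.schedules.ExponentialDecay:
    #   lr = initial_learning_rate * decay_rate ** (step / decay_steps)
    exponent = step / decay_steps
    if staircase:
        exponent = int(exponent)  # truncate: decay in discrete jumps
    return initial_learning_rate * decay_rate ** exponent

exponential_decay(0.1, step=0)       # 0.1: no decay at step 0
exponential_decay(0.1, step=10_000)  # one full decay period, ~0.09
```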
@@ -121,25 +118,25 @@ train_model(model, initial_learning_rate=.1)
 ```
 
 Epoch 1/10
-313/313 [==============================] - 1s 2ms/step - loss: 9.1129 - val_loss: 9.2426
+313/313 [==============================] - 1s 2ms/step - loss: 9.1098 - val_loss: 9.2552
 Epoch 2/10
-313/313 [==============================] - 1s 2ms/step - loss: 7.8147 - val_loss: 8.4241
+313/313 [==============================] - 1s 2ms/step - loss: 7.7995 - val_loss: 8.3143
 Epoch 3/10
-313/313 [==============================] - 1s 2ms/step - loss: 7.5386 - val_loss: 8.0456
+313/313 [==============================] - 1s 2ms/step - loss: 7.5270 - val_loss: 8.0499
 Epoch 4/10
-313/313 [==============================] - 1s 2ms/step - loss: 7.0556 - val_loss: 7.3737
+313/313 [==============================] - 1s 2ms/step - loss: 7.2095 - val_loss: 7.5935
 Epoch 5/10
-313/313 [==============================] - 1s 2ms/step - loss: 5.8239 - val_loss: 4.4619
+313/313 [==============================] - 1s 2ms/step - loss: 6.0665 - val_loss: 6.7911
 Epoch 6/10
-313/313 [==============================] - 1s 2ms/step - loss: 2.3169 - val_loss: 1.5477
+313/313 [==============================] - 1s 2ms/step - loss: 3.1178 - val_loss: 1.5964
 Epoch 7/10
-313/313 [==============================] - 1s 2ms/step - loss: 1.0104 - val_loss: 0.7407
+313/313 [==============================] - 1s 2ms/step - loss: 1.1686 - val_loss: 0.8541
 Epoch 8/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.6479 - val_loss: 0.9755
+313/313 [==============================] - 1s 2ms/step - loss: 0.7370 - val_loss: 1.5969
 Epoch 9/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.5144 - val_loss: 0.3477
+313/313 [==============================] - 1s 2ms/step - loss: 0.6011 - val_loss: 0.3739
 Epoch 10/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.3837 - val_loss: 0.2306
+313/313 [==============================] - 1s 2ms/step - loss: 0.4458 - val_loss: 0.3114
 
 Now, we’ll use the
 [`MonotonicDense`](https://airtai.github.io/mono-dense-keras/layers.html#monotonicdense)
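The `MonotonicDense` layer linked above constrains a dense layer so its output is monotonic in selected inputs. One common way to obtain that guarantee, shown here purely as an illustration of the idea and not as the library's actual implementation, is to pass the kernel through an absolute value so every weight, and hence every partial derivative, is non-negative:

```python
import numpy as np

def monotone_dense(x: np.ndarray, kernel: np.ndarray, bias: np.ndarray) -> np.ndarray:
    # |kernel| forces all weights to be non-negative, so the output is
    # non-decreasing in every input feature (before any activation).
    return x @ np.abs(kernel) + bias

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 4))
bias = np.zeros(4)

x = rng.normal(size=(5, 3))
x_bumped = x.copy()
x_bumped[:, 0] += 1.0  # increase the first feature only

# Outputs never decrease when an input increases.
assert np.all(monotone_dense(x_bumped, kernel, bias) >= monotone_dense(x, kernel, bias))
```

The real layer is more flexible (it supports per-feature increasing, decreasing, or unconstrained indicators), but the sign-constrained kernel is the core mechanism behind this family of architectures.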
@@ -179,22 +176,22 @@ train_model(model, initial_learning_rate=.001)
 ```
 
 Epoch 1/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.3578 - val_loss: 0.1557
+313/313 [==============================] - 1s 2ms/step - loss: 0.3646 - val_loss: 0.2042
 Epoch 2/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.2650 - val_loss: 0.2754
+313/313 [==============================] - 1s 2ms/step - loss: 0.2895 - val_loss: 0.1387
 Epoch 3/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.2238 - val_loss: 0.0917
+313/313 [==============================] - 1s 2ms/step - loss: 0.2756 - val_loss: 0.1027
 Epoch 4/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1867 - val_loss: 0.0798
+313/313 [==============================] - 1s 2ms/step - loss: 0.2281 - val_loss: 0.0814
 Epoch 5/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1724 - val_loss: 0.7418
+313/313 [==============================] - 1s 2ms/step - loss: 0.1816 - val_loss: 0.0634
 Epoch 6/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1356 - val_loss: 0.4525
+313/313 [==============================] - 1s 2ms/step - loss: 0.1631 - val_loss: 0.1443
 Epoch 7/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1533 - val_loss: 0.0503
+313/313 [==============================] - 1s 2ms/step - loss: 0.1455 - val_loss: 0.1299
 Epoch 8/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1274 - val_loss: 0.0395
+313/313 [==============================] - 1s 2ms/step - loss: 0.1701 - val_loss: 0.0709
 Epoch 9/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1171 - val_loss: 0.0381
+313/313 [==============================] - 1s 2ms/step - loss: 0.1250 - val_loss: 0.0644
 Epoch 10/10
-313/313 [==============================] - 1s 2ms/step - loss: 0.1005 - val_loss: 0.0500
+313/313 [==============================] - 1s 2ms/step - loss: 0.1426 - val_loss: 0.0405