Commit
* Initial commit - files upload * Create README.md * Create requirements.txt * Update README.md * Create README.md * Update README.md * Add example image * Create README.md * Update README.md
1 parent 8372f90 · commit c156707
Showing 41 changed files with 447,356 additions and 1 deletion.
@@ -1,2 +1,14 @@
 # deep-learning-notes
-Experiments with Deep Learning
+Experiments with Deep Learning and other resources:
+
+* [keras-capsule-pooling](keras-capsule-pooling) - an after-hours experiment in which I try to implement capsule pooling for images.
+* [max-normed-optimizer](max-normed-optimizer) - an experimental implementation of an interesting gradient descent optimizer which normalizes gradients by their norms. Contains various experiments which show the potential power of this method.
+* [selu-regularization](selu-regularization) - a Keras regularizer layer which forces SELU-like regularization on the model weights (Dense and Conv2D versions are provided). SELU was introduced as an activation function with a special initialization method; these regularizers can be added to force the weights to preserve the self-normalizing property during training.
+* [tf-oversampling](tf-oversampling) - an example of how to implement oversampling with the tf.data.Dataset API.
+
+# Seminars on Deep Learning and Machine Learning
+
+[Seminars](seminars) - contains a number of presentations I have given at our company.
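The oversampling approach mentioned for tf-oversampling can be sketched roughly as follows. This is a minimal illustration, not the repository's code: the toy tensors and the 50/50 sampling weights are assumptions, and `Dataset.sample_from_datasets` as a static method requires TF 2.7 or newer (older versions expose it as `tf.data.experimental.sample_from_datasets`).

```python
import tensorflow as tf

# Toy imbalanced data: 90 majority samples (label 0), 10 minority (label 1).
# In practice these would be two class-specific slices of a real dataset.
majority = tf.data.Dataset.from_tensor_slices(([0.0] * 90, [0] * 90))
minority = tf.data.Dataset.from_tensor_slices(([1.0] * 10, [1] * 10))

# Repeat both streams so sampling never exhausts the smaller one, then
# interleave them with equal probability to get a balanced stream.
balanced = tf.data.Dataset.sample_from_datasets(
    [majority.repeat(), minority.repeat()],
    weights=[0.5, 0.5],
)

# Roughly half the drawn labels should now belong to the minority class.
labels = [int(y) for _, y in balanced.take(1000)]
```

Note that `repeat()` makes the balanced dataset infinite, so downstream code must bound it (e.g. with `take` or `steps_per_epoch`).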
@@ -0,0 +1,4 @@
# Abstract

Preliminary material on the basics of Deep Learning and Machine Learning.
Slides cover topics such as graphs, backpropagation, optimizers, and regularization.
Binary file added: seminars/2017-06-Introduction-to-Variational-Autoencoders/slides.pptx (+18.6 MB)
212 additions: ...ng-neural-nets-and-orthogonal-initialization/notebooks/SELU-self-normalization-demo.ipynb (large diff not rendered)
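The notebook itself is not rendered here, but the effect it is named after can be sketched in a few lines. This is a NumPy stand-in, not the notebook's code; the layer width, depth, and initialization scheme are illustrative. With lecun-normal weights, repeated SELU layers keep activations near zero mean and unit variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# SELU constants from Klambauer et al.'s self-normalizing networks.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # expm1 on the clipped negative part avoids overflow warnings.
    neg = ALPHA * np.expm1(np.minimum(x, 0.0))
    return SCALE * np.where(x > 0, x, neg)

# Push a standard-normal batch through 20 SELU layers with
# lecun-normal weights (variance 1/fan_in).
x = rng.standard_normal((1024, 256))
for _ in range(20):
    w = rng.standard_normal((256, 256)) / np.sqrt(256.0)
    x = selu(x @ w)

mean, std = float(x.mean()), float(x.std())  # stays close to (0, 1)
```

Replacing `selu` with a plain ReLU in the same loop makes the activation statistics drift instead of converging, which is the contrast such a demo typically shows.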
20 additions: ...7-Self-normalizing-neural-nets-and-orthogonal-initialization/notebooks/SeluRegularizer.py
@@ -0,0 +1,20 @@
from keras.regularizers import Regularizer
import keras.backend as K


class SeluRegularizer(Regularizer):
    """Weight penalty encouraging the self-normalizing conditions SELU
    relies on: zero-mean weight columns with non-vanishing norms."""

    def __init__(self, mu=0.001, tau=0.001):
        self.mu = K.cast_to_floatx(mu)
        self.tau = K.cast_to_floatx(tau)

    def __call__(self, x):
        # Penalize non-zero column sums, pushing column means toward zero.
        mean_loss = self.mu * K.mean(K.square(K.sum(x, axis=0)))
        # Penalize columns whose squared norm collapses toward zero
        # (-log grows without bound as the norm vanishes).
        tau_loss = -self.tau * K.mean(K.log(K.sum(K.square(x), axis=0) + K.epsilon()))
        return mean_loss + tau_loss

    def get_config(self):
        return {'mu': float(self.mu),
                'tau': float(self.tau)}
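A usage sketch for the regularizer above, ported to tf.keras rather than standalone Keras (an assumption about the reader's setup); the model shape, data, and hyperparameters are illustrative, not from the repository.

```python
import numpy as np
from tensorflow import keras

K = keras.backend


class SeluRegularizer(keras.regularizers.Regularizer):
    # Same penalty as the repository's SeluRegularizer, restated here so
    # the sketch is self-contained (assumes the tf.keras backend behaves
    # like the standalone-keras one for these ops).
    def __init__(self, mu=0.001, tau=0.001):
        self.mu = mu
        self.tau = tau

    def __call__(self, x):
        mean_loss = self.mu * K.mean(K.square(K.sum(x, axis=0)))
        tau_loss = -self.tau * K.mean(
            K.log(K.sum(K.square(x), axis=0) + K.epsilon()))
        return mean_loss + tau_loss

    def get_config(self):
        return {"mu": self.mu, "tau": self.tau}


# Attach the penalty to a SELU layer's kernel; lecun_normal is the
# initializer the SELU paper pairs with the activation.
model = keras.Sequential([
    keras.layers.Dense(16, activation="selu",
                       kernel_initializer="lecun_normal",
                       kernel_regularizer=SeluRegularizer(),
                       input_shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x = np.random.randn(64, 8).astype("float32")
y = np.random.randn(64, 1).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)
```

The regularization term is added to the training loss automatically once the instance is passed as `kernel_regularizer`, so no extra wiring is needed in the training loop.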